
HIVE-23867: Truncate table fails with AccessControlException if doAs is enabled and the table's database is a replication source


    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.1.1
    • Fix Version/s: None
    • Component/s: Hive, repl
    • Labels: None

      Description

      Steps to reproduce:

      1. Enable doAs (hive.server2.enable.doAs=true).
      2. As some user who is not an HDFS superuser, create a database that is a replication source:
         create database sampledb with dbproperties('repl.source.for'='1,2,3');
      3. Create a table:
         create table sampledb.sampletble (id int);
      4. Insert some data into it:
         insert into sampledb.sampletble values (1), (2), (3);
      5. Truncate the table (truncate table sampledb.sampletble;), which fails with the following error:

       org.apache.hadoop.ipc.RemoteException: User username is not a super user (non-super user cannot change owner).
           at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:85)
           at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1907)
           at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:866)
           at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:531)
           at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
       
           at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1498) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.ipc.Client.call(Client.java:1444) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.ipc.Client.call(Client.java:1354) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at com.sun.proxy.$Proxy31.setOwner(Unknown Source) ~[?:?]
           at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setOwner(ClientNamenodeProtocolTranslatorPB.java:470) ~[hadoop-hdfs-client-3.1.1.3.1.5.0-152.jar:?]
           at sun.reflect.GeneratedMethodAccessor151.invoke(Unknown Source) ~[?:?]
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232]
           at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232]
           at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) [hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) [hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) [hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at com.sun.proxy.$Proxy32.setOwner(Unknown Source) [?:?]
           at org.apache.hadoop.hdfs.DFSClient.setOwner(DFSClient.java:1914) [hadoop-hdfs-client-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.hdfs.DistributedFileSystem$36.doCall(DistributedFileSystem.java:1764) [hadoop-hdfs-client-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.hdfs.DistributedFileSystem$36.doCall(DistributedFileSystem.java:1761) [hadoop-hdfs-client-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.hdfs.DistributedFileSystem.setOwner(DistributedFileSystem.java:1774) [hadoop-hdfs-client-3.1.1.3.1.5.0-152.jar:?]
           at org.apache.hadoop.hive.metastore.ReplChangeManager.recycle(ReplChangeManager.java:238) [hive-exec-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
           at org.apache.hadoop.hive.metastore.ReplChangeManager.recycle(ReplChangeManager.java:191) [hive-exec-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
           at org.apache.hadoop.hive.metastore.ReplChangeManager.recycle(ReplChangeManager.java:191) [hive-exec-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
           at org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:395) [hive-exec-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
           at org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:389) [hive-exec-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
           at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.truncateTableInternal(HiveMetaStore.java:3167) [hive-exec-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
           at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.truncate_table_req(HiveMetaStore.java:3145) [hive-exec-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
      

      The reason is that Hive performs the change-management setOwner call with a FileSystem object created from the client's UGI. That user is not on the NameNode's superuser list, so HDFS rejects the ownership change. The metastore should instead create the FileSystem handle as its own login (service) user for this call.

      https://github.com/apache/hive/blob/master/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/ReplChangeManager.java#L230

      https://github.com/apache/hive/blob/master/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/ReplChangeManager.java#L290
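      A minimal sketch of one possible fix, assuming the metastore's login user (typically hive) is on the NameNode's superuser list: perform the setOwner call inside a doAs block for the login user, so the FileSystem handle is bound to the service user rather than the proxied client UGI. The helper class and method names below are hypothetical and for illustration only; the actual change would go inside ReplChangeManager.recycle at the lines linked above.

          import java.io.IOException;
          import java.security.PrivilegedExceptionAction;

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.security.UserGroupInformation;

          public class CmOwnerFixSketch {
            // Change ownership of a file recycled into the CM root. HDFS only
            // allows chown by a superuser, so run the call as the metastore's
            // login user instead of the doAs client user that issued TRUNCATE.
            static void setOwnerAsLoginUser(Configuration conf, Path cmPath,
                String owner, String group) throws IOException, InterruptedException {
              UserGroupInformation loginUser = UserGroupInformation.getLoginUser();
              loginUser.doAs((PrivilegedExceptionAction<Void>) () -> {
                // Path.getFileSystem() inside doAs returns a FileSystem from the
                // cache keyed by the login user's UGI, not the client UGI.
                FileSystem fs = cmPath.getFileSystem(conf);
                fs.setOwner(cmPath, owner, group);
                return null;
              });
            }
          }

      Note that the FileSystem cache is keyed by UGI, so obtaining the handle inside doAs is what binds it to the service user; reusing the fs object that was created earlier from the client UGI would still fail with the same AccessControlException.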


      People

      • Assignee: Unassigned
      • Reporter: Rajkumar Singh
      • Votes: 0
      • Watchers: 4
