Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)
    • Fix Version/s: None
    • Component/s: security
    • Labels: None

      Description

      Currently, users who want to remove an encrypted directory with the FsShell remove commands must pass the -skipTrash option to bypass trash entirely.
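
      For example, the current workaround (the -skipTrash option the error message itself suggests, applied to the same example path used in the transcript below):

      hdfs dfs -rm -r -skipTrash /user/hdfs/enc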

      If users try to remove an encrypted directory while Trash is enabled, they will see the following error:

      [hdfs@schu-enc2 ~]$ hdfs dfs -rm -r /user/hdfs/enc
      2014-07-29 13:47:28,799 INFO  [main] hdfs.DFSClient (DFSClient.java:<init>(604)) - Found KeyProvider: KeyProviderCryptoExtension: jceks://file@/home/hdfs/hadoop-data/test.jks
      2014-07-29 13:47:29,563 INFO  [main] fs.TrashPolicyDefault (TrashPolicyDefault.java:initialize(92)) - Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
      rm: Failed to move to trash: hdfs://schu-enc2.vpc.com:8020/user/hdfs/enc. Consider using -skipTrash option
      

      This is because moving a path to trash is implemented as a rename into the user's .Trash directory, which lies outside the encryption zone, and the NameNode rejects renames out of an encryption zone, as its log explains:

      2014-07-29 13:47:29,596 INFO  [IPC Server handler 8 on 8020] ipc.Server (Server.java:run(2120)) - IPC Server handler 8 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.rename from 172.25.3.153:48295 Call#9 Retry#0
      java.io.IOException: /user/hdfs/enc can't be moved from an encryption zone.
      	at org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager.checkMoveValidity(EncryptionZoneManager.java:175)
      	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedRenameTo(FSDirectory.java:526)
      	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.renameTo(FSDirectory.java:440)
      	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInternal(FSNamesystem.java:3593)
      	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInt(FSNamesystem.java:3555)
      	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3522)
      	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:727)
      	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:542)
      	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:607)
      	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
      	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2099)
      	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2095)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at javax.security.auth.Subject.doAs(Subject.java:415)
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1626)
      	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
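
      Note that the same restriction can be reproduced with a plain rename; a rough sketch, assuming a destination path outside the encryption zone:

      # Illustrative only: /user/hdfs/enc-moved is an assumed destination outside the zone.
      # The NameNode rejects this with the same "can't be moved from an encryption zone"
      # error shown above, which is the same rename the move-to-trash step attempts on
      # the user's behalf (into /user/hdfs/.Trash/Current).
      hdfs dfs -mv /user/hdfs/enc /user/hdfs/enc-moved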
      


          People

          • Assignee: Unassigned
          • Reporter: Stephen Chu
          • Votes: 0
          • Watchers: 4
