Hadoop Common / HADOOP-14705

Add batched interface reencryptEncryptedKeys to KMS

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.0.0-beta1
    • Component/s: kms
    • Labels: None

      Description

      HADOOP-13827 already enabled the KMS to re-encrypt an EncryptedKeyVersion.

      As the performance results of HDFS-10899 show, communication overhead with the KMS accounts for the majority of the time. This jira therefore proposes adding a batched interface to re-encrypt multiple EDEKs in one call.
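The shape of such a batched call can be sketched as below. This is an illustrative stand-in only: the real API lives in KeyProviderCryptoExtension/KMSClientProvider and operates on EncryptedKeyVersion objects; the Edek class and method body here are simplified assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchReencryptSketch {

    // Stand-in for an EncryptedKeyVersion: a key name plus encrypted material.
    static class Edek {
        final String keyName;
        final byte[] material;
        Edek(String keyName, byte[] material) {
            this.keyName = keyName;
            this.material = material;
        }
    }

    // Batched call: one request carries the whole list, so the per-call
    // network overhead is paid once instead of once per EDEK.
    static List<Edek> reencryptEncryptedKeys(List<Edek> edeks) {
        List<Edek> out = new ArrayList<>(edeks.size());
        for (Edek e : edeks) {
            // The real server decrypts with the old key version and
            // re-encrypts with the latest one; simulated here as a copy.
            out.add(new Edek(e.keyName, e.material.clone()));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Edek> batch = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            batch.add(new Edek("key1", new byte[] {(byte) i}));
        }
        System.out.println(reencryptEncryptedKeys(batch).size());
    }
}
```

The point of the batching is amortization: with N EDEKs per request, the fixed HTTP round-trip cost is divided by N.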

      1. HADOOP-14705.01.patch
        42 kB
        Xiao Chen
      2. HADOOP-14705.02.patch
        46 kB
        Xiao Chen
      3. HADOOP-14705.03.patch
        59 kB
        Xiao Chen
      4. HADOOP-14705.04.patch
        58 kB
        Xiao Chen
      5. HADOOP-14705.05.patch
        58 kB
        Xiao Chen
      6. HADOOP-14705.06.patch
        61 kB
        Xiao Chen
      7. HADOOP-14705.07.patch
        62 kB
        Xiao Chen
      8. HADOOP-14705.08.patch
        62 kB
        Xiao Chen
      9. HADOOP-14705.09.patch
        65 kB
        Xiao Chen
      10. HADOOP-14705.10.patch
        65 kB
        Xiao Chen
      11. HADOOP-14705.11.patch
        66 kB
        Xiao Chen

        Issue Links

          Activity

          Xiao Chen added a comment -

          This was committed to trunk. Thanks again

          Hudson added a comment -

          ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12225 (See https://builds.apache.org/job/Hadoop-trunk-Commit/12225/)
          HADOOP-14705. Add batched interface reencryptEncryptedKeys to KMS. (xiao: rev 4ec5acc70418a3f2327cf83ecae1789a057fdd99)

          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/KMSUtil.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSRESTConstants.java
          • (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
          • (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
          • (edit) hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
          • (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderCryptoExtension.java
          • (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSServerJSONUtils.java
          • (edit) hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSAudit.java
          • (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/EagerKeyGeneratorKeyProviderCryptoExtension.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
          • (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
          • (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
          • (edit) hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
          Xiao Chen added a comment -

          Committing this given Wei-Chiu Chuang's pending +1.

          Thanks for the reviews Wei-Chiu and Rushabh!

          Rushabh S Shah added a comment -

          +1 (non-binding) lgtm.
          Thanks Xiao Chen for being so patient and following up with the review changes immediately.

          Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 20s Maven dependency ordering for branch
          +1 mvninstall 15m 1s trunk passed
          +1 compile 18m 49s trunk passed
          +1 checkstyle 0m 49s trunk passed
          +1 mvnsite 2m 48s trunk passed
          +1 findbugs 2m 25s trunk passed
          +1 javadoc 1m 30s trunk passed
                Patch Compile Tests
          0 mvndep 0m 10s Maven dependency ordering for patch
          +1 mvninstall 1m 19s the patch passed
          +1 compile 15m 19s the patch passed
          +1 javac 15m 19s the patch passed
          +1 checkstyle 0m 49s hadoop-common-project: The patch generated 0 new + 148 unchanged - 3 fixed = 148 total (was 151)
          +1 mvnsite 2m 34s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 36s the patch passed
          +1 javadoc 1m 23s the patch passed
                Other Tests
          +1 unit 10m 28s hadoop-common in the patch passed.
          +1 unit 3m 19s hadoop-kms in the patch passed.
          +1 asflicense 0m 34s The patch does not generate ASF License warnings.
          85m 6s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HADOOP-14705
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12883037/HADOOP-14705.11.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux b18cbcbbf34b 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / b6bfb2f
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13088/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13088/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Xiao Chen added a comment -

          Patch 11 to address comments from Rushabh. Thanks for the review, hopefully #11 works.

          The check here is just a safety check to make sure it's not insanely huge. It is not set to the same level as the NN's, because the KMS, as part of hadoop-common, should not depend on the HDFS NN.

          For the test comment, I added the '// Decrypt it again and it should be the same' test. I don't think we need the '// Generate another EEK and make sure it's different from the first' check, since we're already comparing with the original EEKs, which shouldn't be the same; that is covered by the existing test in TestGenerate.

          Rushabh S Shah added a comment - edited

          The last patch looks very very close.

          Not on the request itself, but the client sending it and the server receiving it both need to be able to hold and parse it. As it turned out from HDFS-10899, bigger than 2k may trigger edit log sync and impact performance.
          For KMS here, I added a static 10k maxNumPerBatch as a safeguard too. Security-wise it is okay because the ACL is checked before iterating through the json payload.

          If I understand this comment correctly, a batch of more than 2000 will impact namenode performance.
          Then why do we have a limit of 5x that on the server side?
          Am I missing something?
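The safeguard being discussed is a simple upper-bound check. A minimal sketch, assuming a hypothetical constant name and exception type (the real KMS code may use different names and a different rejection mechanism):

```java
public class BatchLimitSketch {
    // Hypothetical stand-in for the static 10k maxNumPerBatch safeguard.
    static final int MAX_NUM_PER_BATCH = 10000;

    static void checkBatchSize(int numEdeks) {
        // Rejects only insanely large requests; typical batches (~2k,
        // sized for NN edit-log behavior) pass well under the limit.
        if (numEdeks > MAX_NUM_PER_BATCH) {
            throw new IllegalArgumentException(
                "Batch size " + numEdeks + " exceeds limit " + MAX_NUM_PER_BATCH);
        }
    }

    public static void main(String[] args) {
        checkBatchSize(2000);      // a typical client batch passes
        try {
            checkBatchSize(20000); // over the server-side safeguard
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

This also illustrates why the two numbers differ: the ~2k figure is a client-side performance guideline, while the 10k server limit is only a sanity cap against abusive payloads.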

          Couple of minor comments.
          TestKeyProviderCryptoExtension.java
          Below line is from patch.

          // Verify the decrypting the new EEK and orig EEK gives the same material.

          We should add the same check in TestKeyProviderCryptoExtension#testGenerateEncryptedKey also.

          KMS.java
          Below line is from patch.

          LOG.trace("Exiting handleEncryptedKeyOp method.");

          • Log line is carried over from handleEncryptedKeyOp method.
          • If I read the existing method code properly, the log line Exiting <methodName> is logged only when the call completes gracefully.
            In case of any exceptions, all other calls log the exception at debug level and don't log the exiting log line.
            Just to be consistent, we should follow the same pattern in the reencryptEncryptedKeys method.
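The convention described above can be sketched as follows. This is illustrative only: the real KMS uses slf4j trace/debug logging, and the method body and failure flag here are placeholders.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class ExitLogSketch {
    private static final Logger LOG =
        Logger.getLogger(ExitLogSketch.class.getName());

    static String reencryptEncryptedKeys(boolean fail) {
        LOG.finest("Entering reencryptEncryptedKeys method.");
        try {
            if (fail) {
                throw new IllegalStateException("simulated failure");
            }
            String result = "ok"; // the actual re-encrypt work would go here
            // The "Exiting" line is emitted only on the graceful path.
            LOG.finest("Exiting reencryptEncryptedKeys method.");
            return result;
        } catch (IllegalStateException e) {
            // Failures log the exception at a lower level and skip
            // the "Exiting" line, matching the other KMS handlers.
            LOG.log(Level.FINE, "Exception in reencryptEncryptedKeys", e);
            throw e;
        }
    }

    public static void main(String[] args) {
        System.out.println(reencryptEncryptedKeys(false));
    }
}
```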
          Rushabh S Shah added a comment -

          I will take a final pass today or tomorrow at the latest.

          Xiao Chen added a comment -

          Same KDiag failure, tracked at HADOOP-14030.
          Plan to commit EOB today, unless Rushabh S Shah or other watchers have additional comments.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 13m 46s trunk passed
          +1 compile 14m 8s trunk passed
          +1 checkstyle 0m 40s trunk passed
          +1 mvnsite 2m 16s trunk passed
          +1 findbugs 1m 54s trunk passed
          +1 javadoc 1m 10s trunk passed
                Patch Compile Tests
          0 mvndep 0m 9s Maven dependency ordering for patch
          +1 mvninstall 0m 54s the patch passed
          +1 compile 10m 35s the patch passed
          +1 javac 10m 35s the patch passed
          +1 checkstyle 0m 41s hadoop-common-project: The patch generated 0 new + 149 unchanged - 3 fixed = 149 total (was 152)
          +1 mvnsite 2m 16s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 10s the patch passed
          +1 javadoc 1m 8s the patch passed
                Other Tests
          -1 unit 8m 6s hadoop-common in the patch failed.
          +1 unit 3m 2s hadoop-kms in the patch passed.
          +1 asflicense 0m 28s The patch does not generate ASF License warnings.
          68m 12s



          Reason Tests
          Failed junit tests hadoop.security.TestKDiag



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HADOOP-14705
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882821/HADOOP-14705.10.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 7cda21a3000d 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 267e19a
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13079/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13079/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13079/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Xiao Chen added a comment -

          Thanks for the review Wei-Chiu Chuang, patch 10 to address all comments.

          Wei-Chiu Chuang added a comment -

          Did a final review before I sign off –

          Found a few very minor log messages that are worth improving:

          KeyProviderCryptoExtension#reencryptEncryptedKeys
          "encryptedKey version name must be '%s', is '%s'"
          ...
          "All keys must be with same key name. found '%s', '%s'"
          

          can be updated with

          "encryptedKey version name must be '%s', but found '%s'"
          and
          "All keys must have the same key name. Expected '%s' but found '%s'"
          

          (KMSClientProvider#reencryptEncryptedKeys has the same Preconditions check that can also be updated)

          Would it make sense to move the following encryptor/decryptor initialization

          if (decryptor == null) {
            decryptor = cc.createDecryptor();
          }
          if (encryptor == null) {
            encryptor = cc.createEncryptor();
          }
          

          to right after

          try (CryptoCodec cc = CryptoCodec.getInstance(keyProvider.getConf())) {
          

          ? (i.e. before the while loop)

          Thanks for the work. I am +1 after these nits are addressed.
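The hoisting suggested above can be sketched as follows. The Cipher and Codec interfaces here are simplified stand-ins, not the actual Hadoop CryptoCodec/Encryptor/Decryptor types.

```java
import java.util.Arrays;
import java.util.List;

public class HoistInitSketch {
    interface Cipher { byte[] apply(byte[] in); }
    interface Codec {
        Cipher createDecryptor();
        Cipher createEncryptor();
    }

    static int reencryptAll(Codec cc, List<byte[]> edeks) {
        // Created once, right after obtaining the codec, instead of
        // being lazily null-checked on every loop iteration.
        Cipher decryptor = cc.createDecryptor();
        Cipher encryptor = cc.createEncryptor();
        int count = 0;
        for (byte[] edek : edeks) {
            encryptor.apply(decryptor.apply(edek));
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Identity codec, good enough to exercise the control flow.
        Codec identity = new Codec() {
            public Cipher createDecryptor() { return in -> in; }
            public Cipher createEncryptor() { return in -> in; }
        };
        System.out.println(reencryptAll(identity,
            Arrays.asList(new byte[1], new byte[1])));
    }
}
```

The behavior is unchanged; the rewrite just removes the two null checks from the hot loop and makes the initialization site obvious.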

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 24s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 11s Maven dependency ordering for branch
          +1 mvninstall 14m 52s trunk passed
          +1 compile 15m 6s trunk passed
          +1 checkstyle 0m 40s trunk passed
          +1 mvnsite 2m 23s trunk passed
          +1 findbugs 2m 0s trunk passed
          +1 javadoc 1m 10s trunk passed
                Patch Compile Tests
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 0m 59s the patch passed
          +1 compile 11m 38s the patch passed
          +1 javac 11m 38s the patch passed
          +1 checkstyle 0m 42s hadoop-common-project: The patch generated 0 new + 150 unchanged - 2 fixed = 150 total (was 152)
          +1 mvnsite 2m 18s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 20s the patch passed
          +1 javadoc 1m 12s the patch passed
                Other Tests
          -1 unit 8m 29s hadoop-common in the patch failed.
          +1 unit 3m 5s hadoop-kms in the patch passed.
          +1 asflicense 0m 30s The patch does not generate ASF License warnings.
          72m 20s



          Reason Tests
          Failed junit tests hadoop.security.TestKDiag



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HADOOP-14705
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882657/HADOOP-14705.09.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 16f70ceff76f 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2d105a2
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13071/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13071/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13071/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Xiao Chen added a comment -

          Thanks again for the review Rushabh S Shah. Patch 9 attached.

          If we remove the try catch from doAs context

          I see your point now, addressed!

          remove unit tests...

          Guess the test may have made sense in v0 when things could go terribly wrong - but I'll take your point about new developers, so removed the comparison when fruit kinds are different.

          Xiao Chen added a comment - edited

          Thanks Wei-Chiu Chuang for the careful reviews.

          warnings

          I tried your suggestions (just #1: add <String, String> and remove the @SuppressWarnings).
          Recompiling, I see (and do not see these with patch 8):

          [WARNING] /Users/xiao/Desktop/repo/hdfs/xiao/hadoop/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java: /Users/xiao/Desktop/repo/hdfs/xiao/hadoop/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java uses unchecked or unsafe operations.
          [WARNING] /Users/xiao/Desktop/repo/hdfs/xiao/hadoop/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java: Recompile with -Xlint:unchecked for details.
          [WARNING] /Users/xiao/Desktop/repo/hdfs/xiao/hadoop/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java: /Users/xiao/Desktop/repo/hdfs/xiao/hadoop/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java uses unchecked or unsafe operations.
          [WARNING] /Users/xiao/Desktop/repo/hdfs/xiao/hadoop/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java: Recompile with -Xlint:unchecked for details.
          

          I agree that in general fixing a warning is better than suppressing it. But here it feels like overkill to play whack-a-mole on all existing usages to remove the annotation. Also, this patch only tries to move the helper methods to a utility class - the annotations existed even before.

          CryptoCodec...

          Good question and discussion.
          Since that impacts generate only, and it's not being changed here, shall we continue on HADOOP-14780? I pasted the above comments on that one.

          shahrs87 Rushabh S Shah added a comment -

          From my understanding, the goal for these is purely for maintenance and statistics. Since key level operations are rare, they're aggregated to the same meters - either admin.calls or key.calls.

          I will take your word for this. You are closer to the developers who developed this feature than I am.

          Problem with that is, the precondition checks also become IOEs.
          The outer wrapper is only to make it possible to log a debug message in the KMS if things go wrong, the inner exception seems to consider provider-thrown exceptions more serious and log an error.... I don't fully get the history for this, so didn't change.

          This patch is converting all the exceptions from doAs into IOException.
          If we remove the try catch from doAs context, then all the exceptions that originate from Preconditions will be re-thrown as the same exception.
          Relevant piece of code is below.

              } catch (Exception e) {
                LOG.debug("Exception in reencryptEncryptedKeys.", e);
                throw e;
              } finally {
                LOG.trace("Exiting handleEncryptedKeyOp method.");
              }
          

          I think the current way is more consistent with other methods in KMS and creates least surprise.

          Only KMS#generateEncryptedKeys follows this practice, i.e. surrounding user.doAs with a try/catch.

          Looking at testGenerateEncryptedKey this case is also there, and I think it doesn't hurt to make sure they're different fruits.

          I would rather remove this comparison.
          The new developers who work on this code in the future will also follow this practice, and that is not desirable.
          If I were you, I would remove that test case from testGenerateEncryptedKey as well, but I'll leave it up to you.
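
          To make the point concrete, here is a small standalone sketch (not Hadoop code; the class and method names are made up for illustration) showing how a blanket wrap of the doAs block turns a Preconditions-style IllegalArgumentException into an IOException:

          ```java
          import java.io.IOException;

          public class ExceptionWrapDemo {

              // Stands in for a Preconditions.checkArgument failure inside the doAs block.
              static void preconditionCheck() {
                  throw new IllegalArgumentException("negative numKeys");
              }

              public static void main(String[] args) {
                  try {
                      try {
                          preconditionCheck();
                      } catch (Exception e) {
                          // What a blanket wrapper effectively does: the caller now sees
                          // an IOException, not the original IllegalArgumentException.
                          throw new IOException(e);
                      }
                  } catch (IOException e) {
                      System.out.println(e.getCause().getClass().getSimpleName());
                  }
              }
          }
          ```

          Removing the inner catch would let the IllegalArgumentException propagate unchanged, which is the behavior described above.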

          shahrs87 Rushabh S Shah added a comment - - edited

          If it is reused, and if CryptoCodec uses SecureRandom, the random numbers generated in DefaultCryptoExtension#generateEncryptedKey may become predictable.

          That's a good question and a valid security concern.
          As far as DefaultCryptoExtension#generateEncryptedKey is concerned, here is the workflow to generate secure random bytes:
          DefaultCryptoExtension#generateEncryptedKey(String encryptionKeyName) --> OpensslAesCtrCryptoCodec#generateSecureRandom(byte[] bytes) --> OsSecureRandom#nextBytes(byte[] bytes) --> OsSecureRandom.fillReservoir(int min).
          All OsSecureRandom#fillReservoir(int min) does is read random bytes from /dev/urandom.

          OsSecureRandom.java
           private void fillReservoir(int min) {
              if (pos >= reservoir.length - min) {
                try {
                  if (stream == null) {
                    stream = new FileInputStream(new File(randomDevPath)); // randomDevPath = /dev/urandom
                  }
                  IOUtils.readFully(stream, reservoir, 0, reservoir.length);
                } catch (IOException e) {
                  throw new RuntimeException("failed to fill reservoir", e);
                }
                pos = 0;
              }
            }
          

          The whole workflow assumes we take default values from hadoop.security.secure.random.impl, hadoop.security.random.device.file.path and hadoop.security.crypto.cipher.suite
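
          For reference, the reservoir pattern quoted above can be sketched as a standalone class. This is a simplification, not the real OsSecureRandom: the actual class keeps the stream open across fills and is configurable; here the device path is just a constructor argument.

          ```java
          import java.io.EOFException;
          import java.io.FileInputStream;
          import java.io.IOException;
          import java.io.InputStream;
          import java.util.Arrays;

          /** Simplified sketch: random bytes are read from a device file in bulk
           *  into a reservoir and served out of that buffer. */
          public class ReservoirRandom {
              private final String devPath;
              private final byte[] reservoir;
              private int pos;

              public ReservoirRandom(String devPath, int size) {
                  this.devPath = devPath;
                  this.reservoir = new byte[size];
                  this.pos = size; // forces a fill on first use
              }

              private void fillReservoir() throws IOException {
                  if (pos >= reservoir.length) {
                      // Simplification: reopen the device per fill instead of caching the stream.
                      try (InputStream in = new FileInputStream(devPath)) {
                          int off = 0;
                          while (off < reservoir.length) {
                              int n = in.read(reservoir, off, reservoir.length - off);
                              if (n < 0) throw new EOFException("device exhausted");
                              off += n;
                          }
                      }
                      pos = 0;
                  }
              }

              public void nextBytes(byte[] out) throws IOException {
                  int off = 0;
                  while (off < out.length) {
                      fillReservoir();
                      int n = Math.min(out.length - off, reservoir.length - pos);
                      System.arraycopy(reservoir, pos, out, off, n);
                      pos += n;
                      off += n;
                  }
              }

              public static void main(String[] args) throws IOException {
                  ReservoirRandom r = new ReservoirRandom("/dev/urandom", 256);
                  byte[] a = new byte[16];
                  byte[] b = new byte[16];
                  r.nextBytes(a);
                  r.nextBytes(b);
                  // Two independent 16-byte draws from /dev/urandom collide with
                  // probability 2^-128, so this prints false in practice.
                  System.out.println(Arrays.equals(a, b));
              }
          }
          ```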

          jojochuang Wei-Chiu Chuang added a comment -

          KMSUtil:

          The following method does not need to suppress warnings if the JSON object is declared with a generic type.

          public static Map toJSON(KeyProvider.KeyVersion keyVersion) {
              Map<String,String> json = new HashMap<String,String>();
              if (keyVersion != null) {
                json.put(KMSRESTConstants.NAME_FIELD,
                    keyVersion.getName());
                json.put(KMSRESTConstants.VERSION_NAME_FIELD,
                    keyVersion.getVersionName());
                json.put(KMSRESTConstants.MATERIAL_FIELD,
                    Base64.encodeBase64URLSafeString(
                        keyVersion.getMaterial()));
              }
              return json;
            }
          

          Without any additional change, these methods do not need to suppress warnings:

          public static Map toJSON(EncryptedKeyVersion encryptedKeyVersion) {
              Map json = new HashMap();
              if (encryptedKeyVersion != null) {
                json.put(KMSRESTConstants.VERSION_NAME_FIELD,
                    encryptedKeyVersion.getEncryptionKeyVersionName());
                json.put(KMSRESTConstants.IV_FIELD, Base64
                    .encodeBase64URLSafeString(encryptedKeyVersion.getEncryptedKeyIv()));
                json.put(KMSRESTConstants.ENCRYPTED_KEY_VERSION_FIELD,
                    toJSON(encryptedKeyVersion.getEncryptedKeyVersion()));
              }
              return json;
            }
          
          public static List<EncryptedKeyVersion>
                parseJSONEncKeyVersions(String keyName, List valueList) {
              checkNotNull(valueList, "valueList");
              List<EncryptedKeyVersion> ekvs = new ArrayList<>(valueList.size());
              if (!valueList.isEmpty()) {
                for (Object values : valueList) {
                  Map valueMap = (Map) values;
                  ekvs.add(parseJSONEncKeyVersion(keyName, valueMap));
                }
              }
              return ekvs;
            }
          
            public static EncryptedKeyVersion parseJSONEncKeyVersion(String keyName,
                Map valueMap) {
              checkNotNull(valueMap, "valueMap");
              String versionName = checkNotNull(
                  (String) valueMap.get(KMSRESTConstants.VERSION_NAME_FIELD),
                  KMSRESTConstants.VERSION_NAME_FIELD);
          
              byte[] iv = Base64.decodeBase64(checkNotNull(
                  (String) valueMap.get(KMSRESTConstants.IV_FIELD),
                  KMSRESTConstants.IV_FIELD));
          
              Map encValueMap = checkNotNull((Map)
                      valueMap.get(KMSRESTConstants.ENCRYPTED_KEY_VERSION_FIELD),
                  KMSRESTConstants.ENCRYPTED_KEY_VERSION_FIELD);
          
              String encVersionName = checkNotNull((String)
                      encValueMap.get(KMSRESTConstants.VERSION_NAME_FIELD),
                  KMSRESTConstants.VERSION_NAME_FIELD);
          
              byte[] encKeyMaterial = Base64.decodeBase64(checkNotNull((String)
                      encValueMap.get(KMSRESTConstants.MATERIAL_FIELD),
                  KMSRESTConstants.MATERIAL_FIELD));
          
              return new KMSClientProvider.KMSEncryptedKeyVersion(keyName, versionName,
                  iv, encVersionName, encKeyMaterial);
            }
          
            public static KeyProvider.KeyVersion parseJSONKeyVersion(Map valueMap) {
              checkNotNull(valueMap, "valueMap");
              KeyProvider.KeyVersion keyVersion = null;
              if (!valueMap.isEmpty()) {
                byte[] material =
                    (valueMap.containsKey(KMSRESTConstants.MATERIAL_FIELD)) ?
                        Base64.decodeBase64(
                            (String) valueMap.get(KMSRESTConstants.MATERIAL_FIELD)) :
                        null;
                String versionName =
                    (String) valueMap.get(KMSRESTConstants.VERSION_NAME_FIELD);
                String keyName = (String) valueMap.get(KMSRESTConstants.NAME_FIELD);
                keyVersion =
                    new KMSClientProvider.KMSKeyVersion(keyName, versionName, material);
              }
              return keyVersion;
            }
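
          As a dependency-free illustration of the round trip these helpers implement (plain java.util types instead of the Hadoop classes; the field-name constants here are stand-ins for the KMSRESTConstants values):

          ```java
          import java.util.Arrays;
          import java.util.Base64;
          import java.util.HashMap;
          import java.util.Map;

          public class EkvJsonRoundTrip {
              // Stand-ins for the KMSRESTConstants field names used above.
              static final String VERSION_NAME = "versionName";
              static final String IV = "iv";
              static final String ENCRYPTED_KEY_VERSION = "encryptedKeyVersion";
              static final String MATERIAL = "material";

              /** Serialize an encrypted key version to a JSON-style map, as toJSON does. */
              static Map<String, Object> toJson(String ekVersionName, byte[] iv, byte[] material) {
                  Map<String, Object> json = new HashMap<>();
                  json.put(VERSION_NAME, ekVersionName);
                  json.put(IV, Base64.getUrlEncoder().withoutPadding().encodeToString(iv));
                  Map<String, Object> inner = new HashMap<>();
                  inner.put(MATERIAL, Base64.getUrlEncoder().withoutPadding().encodeToString(material));
                  json.put(ENCRYPTED_KEY_VERSION, inner);
                  return json;
              }

              /** Parse the IV back out, as parseJSONEncKeyVersion does. */
              static byte[] parseIv(Map<String, Object> json) {
                  return Base64.getUrlDecoder().decode((String) json.get(IV));
              }

              public static void main(String[] args) {
                  byte[] iv = {1, 2, 3, 4};
                  Map<String, Object> json = toJson("key@0", iv, new byte[]{9, 8});
                  System.out.println(Arrays.equals(parseIv(json), iv));
              }
          }
          ```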
          

          On another jira, Rushabh S Shah mentioned a possibility to reuse CryptoCodec (making it a member variable of DefaultCryptoExtension). Are there any security concerns? If it is reused, and if CryptoCodec uses SecureRandom, the random numbers generated in DefaultCryptoExtension#generateEncryptedKey may become predictable. I don't come from a security background, but I am thinking it could be exploitable.

          xiaochen Xiao Chen added a comment -

          Test failures are not related to the changes here. KDiag failure is tracked at HADOOP-14030.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 19s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 19s Maven dependency ordering for branch
          +1 mvninstall 17m 22s trunk passed
          +1 compile 14m 14s trunk passed
          +1 checkstyle 0m 42s trunk passed
          +1 mvnsite 2m 18s trunk passed
          +1 findbugs 1m 59s trunk passed
          +1 javadoc 1m 11s trunk passed
                Patch Compile Tests
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 0m 58s the patch passed
          +1 compile 10m 45s the patch passed
          +1 javac 10m 45s the patch passed
          +1 checkstyle 0m 40s hadoop-common-project: The patch generated 0 new + 150 unchanged - 2 fixed = 150 total (was 152)
          +1 mvnsite 2m 15s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 10s the patch passed
          +1 javadoc 1m 10s the patch passed
                Other Tests
          -1 unit 9m 6s hadoop-common in the patch failed.
          +1 unit 3m 2s hadoop-kms in the patch passed.
          +1 asflicense 0m 30s The patch does not generate ASF License warnings.
          73m 26s



          Reason Tests
          Failed junit tests hadoop.ipc.TestRPC
            hadoop.security.TestKDiag



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HADOOP-14705
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882456/HADOOP-14705.08.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux e4892f3e708e 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / b298948
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13064/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13064/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13064/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xiaochen Xiao Chen added a comment -

          Attaching patch 8 to address all comments from Wei-Chiu. Thanks for reviewing.

          After adding this interface, does it deprecate the old reencrypt interface added in HADOOP-13827?

          I don't feel strongly either way, but if we do, let's do it in a separate jira within beta1 timeframe.

          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 19s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 9s Maven dependency ordering for branch
          +1 mvninstall 14m 21s trunk passed
          +1 compile 14m 48s trunk passed
          +1 checkstyle 0m 41s trunk passed
          +1 mvnsite 2m 19s trunk passed
          +1 findbugs 1m 57s trunk passed
          +1 javadoc 1m 12s trunk passed
                Patch Compile Tests
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 0m 58s the patch passed
          +1 compile 11m 11s the patch passed
          +1 javac 11m 11s the patch passed
          -0 checkstyle 0m 40s hadoop-common-project: The patch generated 3 new + 152 unchanged - 2 fixed = 155 total (was 154)
          +1 mvnsite 2m 21s the patch passed
          +1 whitespace 0m 1s The patch has no whitespace issues.
          +1 findbugs 2m 16s the patch passed
          +1 javadoc 1m 9s the patch passed
                Other Tests
          +1 unit 8m 7s hadoop-common in the patch passed.
          +1 unit 3m 3s hadoop-kms in the patch passed.
          +1 asflicense 0m 30s The patch does not generate ASF License warnings.
          70m 23s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HADOOP-14705
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882438/HADOOP-14705.07.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 3f04da0de762 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / ab1a8ae
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/13063/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13063/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13063/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          jojochuang Wei-Chiu Chuang added a comment - - edited

          Thanks for the new patch, Xiao.

          Looking at rev 007, I feel like adding an artificial, hard-coded limit on the size of the payload is not the best approach.

          EagerKeyGeneratorKeyProviderCryptoExtension#reencryptEncryptedKeys
          Preconditions.checkArgument(jsonPayload.size() <= MAX_NUM_PER_BATCH,
          	          "jsonPayload too many objects");
          

          I would actually prefer to log a warning if the size exceeds a certain limit, rather than rejecting it right away.
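
          A sketch of that alternative, using the names MAX_NUM_PER_BATCH and jsonPayload from the snippet quoted above (the limit value and the helper class are made up for illustration):

          ```java
          import java.util.Collections;
          import java.util.List;

          public class BatchLimitCheck {
              // Hypothetical soft limit; the real value would come from configuration.
              static final int MAX_NUM_PER_BATCH = 2000;

              /** Returns a warning string instead of throwing a Precondition failure. */
              static String checkBatchSize(List<?> jsonPayload) {
                  if (jsonPayload.size() > MAX_NUM_PER_BATCH) {
                      return "reencryptEncryptedKeys batch of " + jsonPayload.size()
                          + " exceeds suggested limit " + MAX_NUM_PER_BATCH;
                  }
                  return null; // within limit: nothing to report
              }

              public static void main(String[] args) {
                  System.out.println(checkBatchSize(Collections.nCopies(3000, "x")));
                  System.out.println(checkBatchSize(Collections.nCopies(10, "x")));
              }
          }
          ```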

          After adding this interface, does it deprecate the old reencrypt interface added in HADOOP-13827?

          Regarding doc: it might be useful to mention that the batch reencryption interface only supports EEKs in the same encryption zone (i.e., with the same EK).

          xiaochen Xiao Chen added a comment -

          Thanks a lot for the prompt review Rushabh S Shah, and apologies for the difficulty in applying the patch.

          All comments addressed in patch 7 with exception and explanations below.

          Do we need to increment AdminCallsMeter ?

          I think it's the other way. These meters are exposed to the jmx, where you would see things like:

              "name" : "metrics:name=hadoop.kms.unauthorized.calls.meter",
              "name" : "metrics:name=hadoop.kms.invalid.calls.meter",
              "name" : "metrics:name=hadoop.kms.decrypt_eek.calls.meter",
              "name" : "metrics:name=hadoop.kms.admin.calls.meter",
              "name" : "metrics:name=hadoop.kms.generate_eek.calls.meter",
              "name" : "metrics:name=hadoop.kms.key.calls.meter",
              "name" : "metrics:name=hadoop.kms.unauthenticated.calls.meter",
          

          From my understanding, the goal for these is purely for maintenance and statistics. Since key level operations are rare, they're aggregated to the same meters - either admin.calls or key.calls.
          For eek calls (generate/decrypt), they both have their own meters. reencrypt fits into this category.

          We can remove the try catch block around user.doAs context and let the outer try catch block handle the exception propagating from the doAs call.

          Problem with that is, the precondition checks also become IOEs.
          I think the current way is more consistent with other methods in KMS and creates least surprise.
          The outer wrapper is only to make it possible to log a debug message in the KMS if things go wrong, the inner exception seems to consider provider-thrown exceptions more serious and log an error.... I don't fully get the history for this, so didn't change.

          ... in the test ... I think we are comparing apples to oranges.

          Agreed. Looking at testGenerateEncryptedKey this case is also there, and I think it doesn't hurt to make sure they're different fruits. Also added your suggestion #5 which does the more important comparison of apples to apples.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 20s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 9s Maven dependency ordering for branch
          +1 mvninstall 16m 6s trunk passed
          +1 compile 16m 20s trunk passed
          +1 checkstyle 0m 41s trunk passed
          +1 mvnsite 2m 23s trunk passed
          +1 findbugs 2m 5s trunk passed
          +1 javadoc 1m 20s trunk passed
                Patch Compile Tests
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 1m 1s the patch passed
          +1 compile 13m 4s the patch passed
          +1 javac 13m 4s the patch passed
          +1 checkstyle 0m 42s hadoop-common-project: The patch generated 0 new + 150 unchanged - 2 fixed = 150 total (was 152)
          +1 mvnsite 2m 17s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 17s the patch passed
          +1 javadoc 1m 13s the patch passed
                Other Tests
          -1 unit 7m 43s hadoop-common in the patch failed.
          +1 unit 3m 8s hadoop-kms in the patch passed.
          +1 asflicense 0m 29s The patch does not generate ASF License warnings.
          75m 42s



          Reason Tests
          Failed junit tests hadoop.ipc.TestRPC
            hadoop.net.TestDNS



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HADOOP-14705
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882403/HADOOP-14705.06.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux d42b00258d5a 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / dd7916d
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13062/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13062/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13062/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          shahrs87 Rushabh S Shah added a comment -

          Reviewed patch#4 since patch#5 didn't apply cleanly when I started reviewing.

          KMS#reencryptEncryptedKeys()

          1. Do we need to increment AdminCallsMeter ?
            It's not clear from the class which calls are supposed to be admin and which are non-admin.
            Since we are planning to create a new thread in namenode for batch re-encrypting, I think it would be admin call.
            Please comment otherwise.
          2. We can remove the try catch block around user.doAs context and let the outer try catch block handle the exception propagating from the doAs call.

          KeyAuthorizationKeyProvider#reencryptEncryptedKeys(List<EncryptedKeyVersion> ekvs)

          1. The following chunk of code is redundant.
               if (keyName == null) {
                      keyName = ekv.getEncryptionKeyName();
                    } else {
                      if (!keyName.equals(ekv.getEncryptionKeyName())) {
                        throw new IllegalArgumentException(String.format(
                            "multiple keyname '%s' '%s' found for reencryptEncryptedKeys",
                            keyName, ekv.getEncryptionKeyName()));
                      }
            

            We already do this check in KMS#reencryptEncryptedKeys
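            A minimal, self-contained sketch of that single-pass check (names here are illustrative; the real method reads ekv.getEncryptionKeyName() from each EncryptedKeyVersion):

```java
import java.util.List;

public class KeyNameCheckSketch {
  // Single-pass check that every EDEK in the batch names the same key.
  // Operating on plain strings for illustration only.
  static String requireSingleKeyName(List<String> keyNames) {
    String keyName = null;
    for (String n : keyNames) {
      if (keyName == null) {
        keyName = n;
      } else if (!keyName.equals(n)) {
        throw new IllegalArgumentException(String.format(
            "multiple keyname '%s' '%s' found for reencryptEncryptedKeys",
            keyName, n));
      }
    }
    return keyName;
  }

  public static void main(String[] args) {
    System.out.println(requireSingleKeyName(List.of("k1", "k1"))); // k1
  }
}
```

            Doing this once at the entry point means the inner provider layers need not repeat it.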

          KMSUtil.java

          • parseJSONEncKeyVersions
            • Why are we using LinkedList?
              We are just traversing the list using listIterator() and calling Iterator#set().
              The set() operation is O(1) in both cases.
              The memory footprint of LinkedList is larger than that of ArrayList.
          • toJSON(KeyProvider.KeyVersion keyVersion) and toJSON(EncryptedKeyVersion encryptedKeyVersion) are still using LinkedHashMap.
            We can make it HashMap.
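            A small runnable sketch of the point above: in-place replacement via ListIterator#set() is O(1) per call on ArrayList as well, so LinkedList buys nothing here (the "reencrypted-" transformation below is a stand-in for the real per-EDEK re-encrypt call):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ListIterator;

public class ReencryptListDemo {
  // Replace each element in place; ArrayList avoids LinkedList's
  // per-node object overhead while set() stays O(1).
  static List<String> reencryptAll(List<String> versionNames) {
    ListIterator<String> it = versionNames.listIterator();
    while (it.hasNext()) {
      String v = it.next();
      it.set("reencrypted-" + v); // stand-in for the real re-encrypt
    }
    return versionNames;
  }

  public static void main(String[] args) {
    List<String> ekvs = new ArrayList<>(List.of("v1", "v2"));
    System.out.println(reencryptAll(ekvs)); // [reencrypted-v1, reencrypted-v2]
  }
}
```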

          TestKeyProviderCryptoExtension#testReencryptEncryptedKeys()

          1. Minor nit:
             assertEquals("Version name of EEK should be EEK",
                      KeyProviderCryptoExtension.EEK,
                      ekv.getEncryptedKeyVersion().getVersionName());
            

            The error message should be "Version name should be EEK"

          2. Another minor nit.
            assertEquals("Name of EEK should be encryption key name",
            Something doesn't seem right with this error message.
            In my view, it should be "EEK's key name should be fookey".
          3. Following code.
             
             if (Arrays.equals(ekv.getEncryptedKeyVersion().getMaterial(),
                      encryptionKey.getMaterial())) {
                    fail("Encrypted key material should not equal decrypted key material");
                  }
            

            Instead you can use

             
            Assert.assertNotEquals("Encrypted key material should not equal decrypted key material", new String(ekv.getEncryptedKeyVersion().getMaterial()), new String(encryptionKey.getMaterial()));
            
          4. Following code.
             	  
            encryptionKey = kp.createKey(ENCRYPTION_KEY_NAME, SecureRandom.getSeed(16), options);
            final KeyVersion kv = kpExt.decryptEncryptedKey(ekv);
            // Following lines are in patch.	  
            if (Arrays.equals(kv.getMaterial(), encryptionKey.getMaterial())) {
                    fail("Encrypted key material should not equal encryption key material");
            }
            

            This comparison didn't make sense to me.
            It compares the secret stored in the backend with the material generated while calling decryptEncryptedKey().
            The two are bound to be different.
            I think we are comparing apples to oranges.

          5. I would like to see the following test case.
            The material returned after decrypting original ekv and decrypting new ekv should be same.
            	 	  
                  for (int i = 0; i < ekvs.size(); ++i) {
                    final EncryptedKeyVersion ekv = ekvs.get(i);
                    final EncryptedKeyVersion orig = ekvsOrig.get(i);
                    KeyVersion decryptedEkv = kpExt.decryptEncryptedKey(ekv);
                    KeyVersion origEkv = kpExt.decryptEncryptedKey(orig);
                    Assert.assertArrayEquals(decryptedEkv.getMaterial(), origEkv.getMaterial());
                  }
            
          xiaochen Xiao Chen added a comment -

          Attaching a patch that has both patch 5, and HADOOP-14779 for review.
          Discussed with Wei-Chiu Chuang, who prefers to have both changes incorporated in one jira.

          xiaochen Xiao Chen added a comment -

          It is based on HADOOP-14779, and optimizes reencryptEncryptedKeys to use the same codec (and en/de-cryptor), and to call getCurrentKey only once for the key name.

          Thanks for looking. As noted, could you please apply HADOOP-14779, then patch 5 here? Only reason to separate out HADOOP-14779 is for cleanness.

          shahrs87 Rushabh S Shah added a comment -

          Patch 5 doesn't apply cleanly on trunk, specifically KeyProviderCryptoExtension.java.
          Xiao Chen: Can you please rebase and then submit the patch ?

          xiaochen Xiao Chen added a comment -

          Sorry, I realized I omitted the comment about the stopwatch.

          Also, does it return time in milliseconds? Can you add the time unit into log message as well?

          From recent debugging experience, I think it makes sense to log this message at info level, so we can easily see in hindsight how the underlying keyprovider performed.
          The stopwatch used is a com.google.common.base.Stopwatch, and can conveniently handle units dynamically in toString. A sample log reads this:

          ... reencryptEncryptedKeys 1000 keys for key k1 took 51.64 ms
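          For readers without Guava at hand, the same always-stopped, unit-bearing log line can be sketched with plain System.nanoTime (method name and message format here are illustrative, modeled on the sample above):

```java
import java.util.concurrent.TimeUnit;

public class ReencryptTiming {
  // Time the batch call unconditionally (no dependency on the logger's
  // level) and format the elapsed time with an explicit unit.
  static String timeReencrypt(Runnable batchCall, int numKeys, String keyName) {
    long start = System.nanoTime();
    batchCall.run();
    long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    return String.format(
        "reencryptEncryptedKeys %d keys for key %s took %d ms",
        numKeys, keyName, elapsedMs);
  }

  public static void main(String[] args) {
    System.out.println(timeReencrypt(() -> { }, 1000, "k1"));
  }
}
```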

          xiaochen Xiao Chen added a comment -

          Thanks for the heads up Rushabh S Shah, good to hear!

          shahrs87 Rushabh S Shah added a comment -

          Will review server side changes later.

          Will review the server side changes today, or tomorrow at the latest.
          Thanks!

          xiaochen Xiao Chen added a comment -

          Patch 5 for more optimization based on internal testing.
          It is based on HADOOP-14779, and optimizes reencryptEncryptedKeys to use the same codec (and en/de-cryptor), and to call getCurrentKey only once for the key name.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 19s Maven dependency ordering for branch
          +1 mvninstall 13m 59s trunk passed
          +1 compile 14m 42s trunk passed
          +1 checkstyle 0m 44s trunk passed
          +1 mvnsite 2m 18s trunk passed
          +1 findbugs 1m 59s trunk passed
          +1 javadoc 1m 10s trunk passed
                Patch Compile Tests
          0 mvndep 0m 9s Maven dependency ordering for patch
          +1 mvninstall 0m 54s the patch passed
          +1 compile 12m 2s the patch passed
          +1 javac 12m 2s the patch passed
          -0 checkstyle 0m 41s hadoop-common-project: The patch generated 1 new + 150 unchanged - 2 fixed = 151 total (was 152)
          +1 mvnsite 2m 16s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 22s the patch passed
          +1 javadoc 1m 18s the patch passed
                Other Tests
          -1 unit 8m 40s hadoop-common in the patch failed.
          +1 unit 3m 10s hadoop-kms in the patch passed.
          +1 asflicense 0m 29s The patch does not generate ASF License warnings.
          71m 40s



          Reason Tests
          Failed junit tests hadoop.ipc.TestRPC
            hadoop.security.TestKDiag



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HADOOP-14705
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882082/HADOOP-14705.04.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux dc4468269186 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 588c190
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/13041/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13041/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13041/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13041/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xiaochen Xiao Chen added a comment -

          Thanks Wei-Chiu for reviewing, good comments! Patch 4 to address all comments, with explanations / exception below.

          if all ekv are the same, wouldn’t it be more efficient to optimize it somehow?

          Yes, and good catch. The client side has a bug; it should not have added keyName to the json.
          The request looks like the example below; keyName should only be on the URL path:

          POST http://HOST:PORT/kms/v1/key/<key-name>/_reencryptbatch
              Content-Type: application/json
          
              [
                {
                  "versionName"         : "<encryptionVersionName>",
                  "iv"                  : "<iv>",            //base64
                  "encryptedKeyVersion" : {
                      "versionName"       : "EEK",
                      "material"          : "<material>",    //base64
                  }
                },
                {
                  "versionName"         : "<encryptionVersionName>",
                  "iv"                  : "<iv>",            //base64
                  "encryptedKeyVersion" : {
                      "versionName"       : "EEK",
                      "material"          : "<material>",    //base64
                  }
                },
                ...
              ]
          

          should the last parameter be Map.class?

          The response is a List<Map>, hence List.class.
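          A self-contained sketch of how the request body above can be built as a List of Maps (the Edek record and method names here are illustrative stand-ins; the real client works with Hadoop's EncryptedKeyVersion):

```java
import java.util.ArrayList;
import java.util.Base64;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchPayloadSketch {
  // Minimal stand-in for EncryptedKeyVersion, for illustration only.
  record Edek(String versionName, byte[] iv, byte[] material) { }

  // Build the JSON array shown above: one map per EDEK, with the key
  // name carried on the URL path rather than repeated in the body.
  static List<Map<String, Object>> toJsonPayload(List<Edek> edeks) {
    Base64.Encoder b64 = Base64.getEncoder();
    List<Map<String, Object>> payload = new ArrayList<>();
    for (Edek e : edeks) {
      Map<String, Object> entry = new LinkedHashMap<>();
      entry.put("versionName", e.versionName());
      entry.put("iv", b64.encodeToString(e.iv()));
      Map<String, Object> inner = new LinkedHashMap<>();
      inner.put("versionName", "EEK");
      inner.put("material", b64.encodeToString(e.material()));
      entry.put("encryptedKeyVersion", inner);
      payload.add(entry);
    }
    return payload;
  }

  public static void main(String[] args) {
    List<Map<String, Object>> p = toJsonPayload(
        List.of(new Edek("k1@0", new byte[]{1}, new byte[]{2})));
    System.out.println(p.get(0).keySet()); // [versionName, iv, encryptedKeyVersion]
  }
}
```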

          Question: is there a practical size limit for a KMS request?

          Not on the request itself, but the client sending it and the server receiving it both need to be able to hold and parse it. As it turned out from HDFS-10899, bigger than 2k may trigger an edit log sync and impact performance.
          For KMS here, I added a static 10k maxNumPerBatch as a safeguard too. Security-wise this is okay because the ACL is checked before iterating through the json payload.
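          A caller that has more EDEKs than the per-batch cap can chunk them client-side; a generic sketch (the cap value and method name are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchLimitSketch {
  // Split a list into batches of at most maxPerBatch items, so each
  // request stays within the server's assumed per-batch safeguard.
  static <T> List<List<T>> partition(List<T> items, int maxPerBatch) {
    List<List<T>> batches = new ArrayList<>();
    for (int i = 0; i < items.size(); i += maxPerBatch) {
      batches.add(new ArrayList<>(
          items.subList(i, Math.min(i + maxPerBatch, items.size()))));
    }
    return batches;
  }

  public static void main(String[] args) {
    List<Integer> keys = new ArrayList<>();
    for (int i = 0; i < 25; i++) {
      keys.add(i);
    }
    System.out.println(partition(keys, 10).size()); // 3
  }
}
```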

          jojochuang Wei-Chiu Chuang added a comment - - edited

          Hi Xiao Chen thanks a lot for taking up this work. I reviewed the patch and have a few comments listed below:

          KMSClientProvider#reencryptEncryptedKeys
          final List jsonPayload = new ArrayList();
          

          should be final List<Map> jsonPayload = new ArrayList<Map>();

          if (keyName == null) {
            checkNotNull(ekv.getEncryptionKeyName(), "ekv.getEncryptionKeyName");
          

          The check is redundant

          jsonPayload.add(KMSUtil.toJSON(ekv, ekv.getEncryptionKeyName()));
          

          if all ekv are the same, wouldn’t it be more efficient to optimize it somehow?

          final List<Map> response =
              call(conn, jsonPayload, HttpURLConnection.HTTP_OK, List.class);
          

          should the last parameter be Map.class?

          Question: is there a practical size limit for a KMS request?

          TestKMS
          fail("Should not be able to reencryptEncryptedKeys");
          

          —> grammatical error: Should not have been

          KMS
          kmsAudit.ok(user, KMSOp.REENCRYPT_EEK_BATCH, name, "");
          

          I wonder if it makes sense to log the size of the batch in extraMsg.

          KMS
          if (LOG.isDebugEnabled()) {
            LOG.debug("reencryptEncryptedKeys {} keys for key {} took {}",
                jsonPayload.size(), name, sw.stop());
          }
          

          It looks like bad practice to me (risking a resource leak) that the StopWatch is only stopped when debug logging is enabled.
          Also, does it return time in milliseconds? Can you add the time unit into log message as well?

          This is irrelevant to this patch, but there are a number of places in KMS where Object references are used unnecessarily:

          Object retJSON;
          …
          retJSON = new ArrayList();
          for (EncryptedKeyVersion edek : retEdeks) {
            ((ArrayList) retJSON).add(KMSUtil.toJSON(edek));
          }
          
          
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 8s Maven dependency ordering for branch
          +1 mvninstall 14m 8s trunk passed
          +1 compile 15m 24s trunk passed
          +1 checkstyle 0m 44s trunk passed
          +1 mvnsite 2m 24s trunk passed
          +1 findbugs 2m 4s trunk passed
          +1 javadoc 1m 9s trunk passed
                Patch Compile Tests
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 1m 4s the patch passed
          +1 compile 11m 34s the patch passed
          +1 javac 11m 34s the patch passed
          +1 checkstyle 0m 40s hadoop-common-project: The patch generated 0 new + 151 unchanged - 2 fixed = 151 total (was 153)
          +1 mvnsite 2m 20s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 30s the patch passed
          -1 javadoc 0m 57s hadoop-common-project_hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
                Other Tests
          -1 unit 9m 6s hadoop-common in the patch failed.
          +1 unit 3m 5s hadoop-kms in the patch passed.
          +1 asflicense 0m 30s The patch does not generate ASF License warnings.
          72m 41s



          Reason Tests
          Failed junit tests hadoop.ha.TestZKFailoverController
            hadoop.security.TestKDiag



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HADOOP-14705
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12880786/HADOOP-14705.03.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 800ff8c0560a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 55a181f
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/12979/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/12979/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/12979/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/12979/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Hide
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 7s Maven dependency ordering for branch
          +1 mvninstall 14m 47s trunk passed
          +1 compile 14m 41s trunk passed
          +1 checkstyle 0m 40s trunk passed
          +1 mvnsite 2m 28s trunk passed
          +1 findbugs 2m 15s trunk passed
          +1 javadoc 1m 13s trunk passed
                Patch Compile Tests
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 1m 1s the patch passed
          +1 compile 10m 52s the patch passed
          +1 javac 10m 52s the patch passed
          -0 checkstyle 0m 41s hadoop-common-project: The patch generated 6 new + 151 unchanged - 2 fixed = 157 total (was 153)
          +1 mvnsite 2m 16s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          -1 findbugs 1m 36s hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          -1 javadoc 0m 52s hadoop-common-project_hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
                Other Tests
          -1 unit 8m 17s hadoop-common in the patch failed.
          +1 unit 3m 2s hadoop-kms in the patch passed.
          +1 asflicense 0m 29s The patch does not generate ASF License warnings.
          70m 53s



          Reason Tests
          FindBugs module:hadoop-common-project/hadoop-common
            Nullcheck of keyName at line 118 of value previously dereferenced in org.apache.hadoop.util.KMSUtil.toJSON(KeyProviderCryptoExtension$EncryptedKeyVersion, String) At KMSUtil.java:118 of value previously dereferenced in org.apache.hadoop.util.KMSUtil.toJSON(KeyProviderCryptoExtension$EncryptedKeyVersion, String) At KMSUtil.java:[line 116]
          Failed junit tests hadoop.security.TestKDiag



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HADOOP-14705
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12880779/HADOOP-14705.03.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 51848e681b8a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 8d3fd81
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/12978/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/12978/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/12978/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/12978/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/12978/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/12978/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Hide
          xiaochen Xiao Chen added a comment - - edited

          Patch 3 addresses all the other comments from Rushabh S Shah.
          Also moved more static functions to KMSUtil, and added null checks on the params - I think it makes sense to do the additional checks now that they're moved to the util class. The existing behavior of these methods is otherwise unchanged.

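          For illustration, the kind of param validation described above could look roughly like the sketch below. The helper names (checkNotNull, checkNotEmpty) come up in the review discussion on this jira, but this exact shape is a guess, not the actual patch code.

```java
// Hedged sketch of parameter validation helpers of the kind described in the
// comment above; the real helpers in KMSClientProvider/KMSUtil may differ in
// name, signature, and exception type.
public class ParamChecks {

  // Returns the argument so it can be used inline at call sites.
  static <T> T checkNotNull(T o, String name) {
    if (o == null) {
      throw new IllegalArgumentException(
          "Parameter '" + name + "' cannot be null");
    }
    return o;
  }

  static String checkNotEmpty(String s, String name) {
    checkNotNull(s, name);
    if (s.isEmpty()) {
      throw new IllegalArgumentException(
          "Parameter '" + name + "' cannot be empty");
    }
    return s;
  }
}
```

          Moving such checks into a shared util class (as the patch does) lets both the client and server sides validate consistently.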
          Hide
          xiaochen Xiao Chen added a comment -

          Thanks for reviewing, Rushabh S Shah. I will address the other comments, but want to discuss these two:

          1. It would be better if we don't fail if one EDEK fails to process.

          I also thought about this, but didn't do it for these reasons:

          • it's hard to throw a clear exception that tells what went wrong on the failures (i.e. throw the first, the last, or some aggregated exception that contains all of them?)
          • I'm not sure a partially re-encrypted batch is useful to the clients. For the namenode it'll make tracking harder, so the NN will call again anyway.
          • Complexity on both the server (need to catch exceptions mid-batch and continue with the rest of the keys, potentially keeping some of the exceptions for a final throw) and the client (need to look up each key and handle it differently depending on whether the return is null)

          4. KMSUtil.java Why do we need to use LinkedHashMap ?

          I'm also not 100% sure LinkedHashMap is required; probably it isn't.
          This patch just moves the existing util methods, so I'd like to leave that change to a new jira for cleanliness.

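          To make the trade-off above concrete, the Map-based partial-failure alternative (and the extra server/client bookkeeping it implies) might look like the following sketch. All names here (Reencryptor, reencryptBatch, failedKeys) are hypothetical stand-ins, not the actual Hadoop KMS types, and EDEKs are modeled as plain strings.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the Map-based partial-failure design discussed
// above; not the real Hadoop API, which kept all-or-nothing semantics.
public class BatchReencryptSketch {

  interface Reencryptor {
    String reencrypt(String edek) throws Exception;
  }

  // Server side: keep going on per-key failures instead of failing the whole
  // batch; a null value marks an EDEK that failed and must be retried.
  static Map<String, String> reencryptBatch(List<String> batch, Reencryptor r) {
    Map<String, String> result = new LinkedHashMap<>(); // keeps request order
    for (String e : batch) {
      try {
        result.put(e, r.reencrypt(e));
      } catch (Exception ex) {
        result.put(e, null); // exception detail is lost or must be aggregated
      }
    }
    return result;
  }

  // Client side: every entry must be inspected to decide whether that key
  // needs another attempt -- the extra tracking burden mentioned above.
  static List<String> failedKeys(Map<String, String> result) {
    List<String> failed = new ArrayList<>();
    for (Map.Entry<String, String> e : result.entrySet()) {
      if (e.getValue() == null) {
        failed.add(e.getKey());
      }
    }
    return failed;
  }
}
```

          The sketch shows both costs at once: the server has no clean place to surface per-key exceptions, and the client has to scan for nulls and re-drive the failed subset, which is exactly the complexity the all-or-nothing design avoids.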
          Hide
          shahrs87 Rushabh S Shah added a comment -

          Thanks Xiao Chen for working on this.
          Seems like a good change.

          I have some review comments from the client side.
          Will review server side changes later.

          1. If I understand the patch correctly, this server request is all-or-nothing.
          I haven't seen HDFS-10899, so I might be missing some context.
          Either all the EDEKs in the list will be re-encrypted, or, if any one EKV encounters an exception, the whole call will fail.
          It would be better if we didn't fail the call when one EDEK fails to process.
          We could return a Map<originalEKV, re-encryptedEKV>. For failed operations, we can just set null and traverse the map on the client side to see which EKVs failed.

          2. KeyProviderCryptoExtension.java
          Instead of creating a CryptoCodec and Encryptor for every EKV, we can create them just once.
          I don't think they store any state specific to each EKV.
          Also, we need to close the CryptoCodec instance.
          I would use a try-with-resources block.

          3. KMSClientProvider.java
          In the reencryptEncryptedKeys method, we need a null check in addition to the empty-list check:
          if (ekvs == null || ekvs.isEmpty())

          4. KMSUtil.java
          toJSON(EncryptedKeyVersion encryptedKeyVersion) and toJSON(KeyProvider.KeyVersion keyVersion)
          Why do we need to use a LinkedHashMap?
          I don't think that on the server side, while iterating over the map, we care about insertion order.

          toJSON(EncryptedKeyVersion encryptedKeyVersion, String keyName)
          Will the keyName ever be null?
          We do a "not null" check in the calling method KMSClientProvider#reencryptEncryptedKeys(List<EncryptedKeyVersion> ekvs)

          checkNotNull and checkNotEmpty
          Both of these are really utility methods. I would rather see them in KMSUtil instead of KMSClientProvider.

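          Review comment 2 (create the codec once, reuse it per key, and close it afterwards) can be sketched with the JDK's javax.crypto.Cipher standing in for Hadoop's CryptoCodec/Encryptor pair. This is an illustrative sketch, not the patch code; and unlike CryptoCodec, Cipher is not Closeable, so the try-with-resources part is only noted in a comment.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the "create once, reuse per key" idea from the review: the
// cipher object is built a single time for the whole batch, and only the
// cheap per-key re-initialization happens inside the loop. In the real
// patch this would be a Closeable CryptoCodec in a try-with-resources block.
public class ReuseCipherSketch {

  static List<byte[]> reencryptAll(byte[] newKeyMaterial, List<byte[]> ivs,
                                   List<byte[]> deks)
      throws GeneralSecurityException {
    // Created once for the whole batch (the relatively expensive part).
    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
    SecretKeySpec key = new SecretKeySpec(newKeyMaterial, "AES");
    List<byte[]> out = new ArrayList<>();
    for (int i = 0; i < deks.size(); i++) {
      // Per-key work: re-init with that EDEK's IV, then encrypt.
      cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(ivs.get(i)));
      out.add(cipher.doFinal(deks.get(i)));
    }
    return out;
  }
}
```

          Hoisting the construction out of the loop matters because the batched call exists precisely to amortize fixed costs across many EDEKs.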
          Hide
          xiaochen Xiao Chen added a comment -

          Patch 2 ready for review.

          • Added a test for crypto extension.
          • Partly reused existing toJSON methods in KMSUtil.
          • Cosmetic updates.
          Hide
          xiaochen Xiao Chen added a comment -

          Patch 1 to express the idea and for high-level review.

          Will add more test cases, and look to see whether some of the util methods can be de-duplicated.


            People

            • Assignee:
              xiaochen Xiao Chen
              Reporter:
              xiaochen Xiao Chen
             • Votes:
               0
               Watchers:
               6
