Hadoop Common
  HADOOP-10720

KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0
    • Fix Version/s: 2.6.0
    • Component/s: security
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      KMS client/server should implement support for generating encrypted keys and decrypting them via the REST API being introduced by HADOOP-10719.
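
      For orientation, here is a minimal client-side sketch of the two operations this issue adds, written against the KeyProviderCryptoExtension API that the patch series builds on; the KMS URI and key name are placeholders and error handling is omitted.

        import java.net.URI;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.crypto.key.KeyProvider;
        import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
        import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;
        import org.apache.hadoop.crypto.key.KeyProviderFactory;

        public class KmsEekClientSketch {
          public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder KMS URI; points the client provider at a running KMS instance.
            KeyProvider provider =
                KeyProviderFactory.get(new URI("kms://http@localhost:16000/kms"), conf);

            // Wrap the provider so the EEK operations are available; for a KMS-backed
            // provider these calls go over the REST API added by this issue.
            KeyProviderCryptoExtension kpExt =
                KeyProviderCryptoExtension.createKeyProviderCryptoExtension(provider);

            // Generate an encrypted key (EEK) under the encryption key named "myKey".
            EncryptedKeyVersion eek = kpExt.generateEncryptedKey("myKey");

            // Decrypt it back into key material.
            KeyProvider.KeyVersion decrypted = kpExt.decryptEncryptedKey(eek);
            System.out.println("Decrypted material length: " + decrypted.getMaterial().length);
          }
        }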

      Attachments

      1. HADOOP-10720.patch
        16 kB
        Alejandro Abdelnur
      2. COMBO.patch
        31 kB
        Alejandro Abdelnur
      3. HADOOP-10720.patch
        17 kB
        Alejandro Abdelnur
      4. COMBO.patch
        32 kB
        Alejandro Abdelnur
      5. HADOOP-10720.patch
        19 kB
        Alejandro Abdelnur
      6. COMBO.patch
        32 kB
        Alejandro Abdelnur
      7. HADOOP-10720.patch
        19 kB
        Alejandro Abdelnur
      8. COMBO.patch
        32 kB
        Alejandro Abdelnur
      9. HADOOP-10720.patch
        16 kB
        Alejandro Abdelnur
      10. COMBO.patch
        27 kB
        Alejandro Abdelnur
      11. HADOOP-10720.1.patch
        30 kB
        Arun Suresh
      12. HADOOP-10720.2.patch
        32 kB
        Arun Suresh
      13. HADOOP-10720.3.patch
        48 kB
        Arun Suresh
      14. HADOOP-10720.4.patch
        50 kB
        Arun Suresh
      15. HADOOP-10720.5.patch
        50 kB
        Arun Suresh
      16. HADOOP-10720.6.patch
        61 kB
        Arun Suresh
      17. HADOOP-10720.7.patch
        62 kB
        Arun Suresh
      18. HADOOP-10720.8.patch
        63 kB
        Arun Suresh
      19. HADOOP-10720.9.patch
        63 kB
        Arun Suresh
      20. HADOOP-10720.10.patch
        62 kB
        Arun Suresh
      21. HADOOP-10720.11.patch
        62 kB
        Arun Suresh
      22. HADOOP-10720.12.patch
        69 kB
        Arun Suresh
      23. HADOOP-10720.13.patch
        68 kB
        Arun Suresh
      24. HADOOP-10720-10750.COMBO.patch
        98 kB
        Arun Suresh
      25. HADOOP-10720.14.patch
        68 kB
        Arun Suresh
      26. HADOOP-10720.15.patch
        71 kB
        Arun Suresh
      27. HADOOP-10720.16.patch
        71 kB
        Arun Suresh
      28. HADOOP-10720.17.patch
        80 kB
        Arun Suresh
      29. HADOOP-10720.18.patch
        80 kB
        Arun Suresh
      30. HADOOP-10720.19.patch
        68 kB
        Arun Suresh
      31. HADOOP-10720.20.patch
        68 kB
        Arun Suresh

        Issue Links

          Activity

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1839 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1839/)
          HADOOP-10720. KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API. (asuresh via tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612399)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSRESTConstants.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestValueQueue.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/conf/kms-acls.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/EagerKeyGeneratorKeyProviderCryptoExtension.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSServerJSONUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1812 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1812/)
          HADOOP-10720. KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API. (asuresh via tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612399)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSRESTConstants.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestValueQueue.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/conf/kms-acls.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/EagerKeyGeneratorKeyProviderCryptoExtension.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSServerJSONUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #620 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/620/)
          HADOOP-10720. KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API. (asuresh via tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612399)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSRESTConstants.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestValueQueue.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/conf/kms-acls.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/EagerKeyGeneratorKeyProviderCryptoExtension.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSServerJSONUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #5925 (See https://builds.apache.org/job/Hadoop-trunk-Commit/5925/)
          HADOOP-10720. KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API. (asuresh via tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612399)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSRESTConstants.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestValueQueue.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/conf/kms-acls.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/EagerKeyGeneratorKeyProviderCryptoExtension.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSServerJSONUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
          Alejandro Abdelnur added a comment -

          Arun, thanks for putting up with all the back and forth during the review iterations. Committed to trunk.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12656927/HADOOP-10720.20.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
          org.apache.hadoop.ipc.TestIPC

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4330//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4330//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

          +1 pending jenkins.

          Arun Suresh added a comment -

          Fixing javadoc issues

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12656914/HADOOP-10720.19.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 3 warning messages.
          See https://builds.apache.org/job/PreCommit-HADOOP-Build/4328//artifact/trunk/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.fs.TestDFVariations
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4328//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4328//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Minor refactoring based on Alejandro Abdelnur's suggestions

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12656704/HADOOP-10720.18.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4323//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4323//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12656703/HADOOP-10720.17.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.
          See https://builds.apache.org/job/PreCommit-HADOOP-Build/4322//artifact/trunk/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.ha.TestZKFailoverControllerStress
          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4322//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4322//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Updating with feedback comments

          Alejandro Abdelnur added a comment -

          The refactoring into a crypto extension is a good idea; I like it.

          The constant KEY_CACHE_PREFIX could be renamed to CONFIG_PREFIX. Its value, to be consistent with other KMS server configs, should be "hadoop.kms.encrypted.key.cache.". We should add these configs to the KMS documentation page.

          The setKeyProvider() in the Extension interface defines a contract for how the KeyProvider is passed to the extension; we should honor that contract in all extension creations (instead of passing it in the constructor of the default extensions).

          In the KeyProviderCryptoExtension, the createKeyProviderCryptoExtension(KeyProvider keyProvider) method should delegate to the other signature of the method.
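
          A rough sketch of the last two points, with hypothetical class names standing in for the real ones (this is not the actual Hadoop code): the extension receives its KeyProvider through setKeyProvider(), and the single-argument factory delegates to the fuller signature.

            import org.apache.hadoop.crypto.key.KeyProvider;

            // Hypothetical names; this only illustrates the review suggestion.
            interface Extension {
              // The contract: the wrapped KeyProvider is handed to the extension here,
              // not through the extension's constructor.
              void setKeyProvider(KeyProvider keyProvider);
            }

            class DefaultCryptoExtension implements Extension {
              private KeyProvider keyProvider;

              @Override
              public void setKeyProvider(KeyProvider keyProvider) {
                this.keyProvider = keyProvider;
              }
            }

            class CryptoExtensionFactory {
              // The single-argument factory delegates to the fuller signature.
              static Extension createKeyProviderCryptoExtension(KeyProvider keyProvider) {
                return createKeyProviderCryptoExtension(keyProvider, new DefaultCryptoExtension());
              }

              static Extension createKeyProviderCryptoExtension(KeyProvider keyProvider,
                                                                Extension extension) {
                extension.setKeyProvider(keyProvider);  // honor the contract on every creation path
                return extension;
              }
            }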

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12656682/HADOOP-10720.16.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.fs.shell.TestCopyPreserveFlag
          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
          org.apache.hadoop.fs.shell.TestTextCommand
          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
          org.apache.hadoop.fs.shell.TestPathData
          org.apache.hadoop.fs.TestDFVariations

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4321//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4321//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12656606/HADOOP-10720.14.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4319//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4319//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Minor refactor

          Arun Suresh added a comment -

          Updated patch with Minor Refactoring

          Arun Suresh added a comment -

          Updating patch with the latest feedback comments.

          Alejandro Abdelnur added a comment -

          almost there, a few minor nits, +1 after that:

          index.apt.vm:

          • The URL for decrypting has iv&material; it should not

          KeyProviderCryptoExtension.java:

          • warmUpEncryptedKeys() javadoc has param 'keVersions', method param name is 'keyNames'

          KMS.java:

          • getKeyVersion() has a spurious change; don't fix indentation in code not affected by the patch

          KMSWebApps.java:

          • the getKeyProvider() should return a KeyProviderCryptoExtension instance, so we don't have to create a new extension on every request (a KMS instance is created per request per the JAX-RPC spec); see the sketch after this list

          KMSClientProvider.java:

          • generateEncryptedKey() funny indentation on the last exception in the method declaration

          KMSRESTConstants.java:

          • move IV_FIELD constant down with other _FIELD constants.
          • ENCRYPTED_KEY_VERSION should be ENCRYPTED_KEY_VERSION_FIELD
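
          As a sketch of the getKeyProvider() point above (hypothetical shape, not the actual KMSWebApp code): build the KeyProviderCryptoExtension once at webapp startup and let every per-request KMS resource reuse it.

            import org.apache.hadoop.crypto.key.KeyProvider;
            import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;

            // Hypothetical sketch: one crypto extension per webapp, not one per request.
            public class KmsWebAppSketch {
              private static KeyProviderCryptoExtension cryptoExtension;

              // Called once when the webapp context is initialized.
              public static void init(KeyProvider provider) {
                cryptoExtension =
                    KeyProviderCryptoExtension.createKeyProviderCryptoExtension(provider);
              }

              // Per-request KMS resource instances all get the same extension back.
              public static KeyProviderCryptoExtension getKeyProvider() {
                return cryptoExtension;
              }
            }
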
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12656576/HADOOP-10720-10750.COMBO.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.
          See https://builds.apache.org/job/PreCommit-HADOOP-Build/4318//artifact/trunk/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4318//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4318//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Uploading combo patch with 10750

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12656574/HADOOP-10720.13.patch
          against trunk revision .

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4317//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Updated patch with feedback; thanks Alejandro Abdelnur.
          Also rebased on HADOOP-10750.

          Alejandro Abdelnur added a comment -

          Arun Suresh, I like the wrapper approach much better; it's much cleaner. A few minor follow-ups:

          CommonConfigurationKeysPublic.java:

          • constant values should be .encrypted.key. instead of .key., and similarly the constant names; update the docs as well.

          KMS.java:

          • unused imports
          • cache constants/defaults here are not used, remove
          • cryptoExtension assignment has funny indentation
          • EEK_GENERATE op call should be an HTTP GET
          • EEK_DECRYPT op should be an HTTP POST with the eek as JSON payload (see the sketch after this list)

          KMSWebApp.java:

          • unused imports

          EncryptedKeyCachingProvider.java:

          • a better name would be EncryptedKeyPreGeneratorKeyProvider, so as not to confuse it with the cache provider that is coming.
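
          As a sketch of the two HTTP-verb points above (hypothetical JAX-RS resource methods; paths and parameter names are illustrative, not the actual KMS.java code): EEK generation as a GET with query parameters, EEK decryption as a POST with the EEK in the JSON payload.

            import java.util.Map;

            import javax.ws.rs.Consumes;
            import javax.ws.rs.GET;
            import javax.ws.rs.POST;
            import javax.ws.rs.Path;
            import javax.ws.rs.PathParam;
            import javax.ws.rs.Produces;
            import javax.ws.rs.QueryParam;
            import javax.ws.rs.core.MediaType;
            import javax.ws.rs.core.Response;

            // Hypothetical resource sketch; the real paths may differ.
            @Path("kms/v1")
            public class KmsEekResourceSketch {

              // EEK_GENERATE: a read-style operation, so an HTTP GET with query parameters.
              @GET
              @Path("key/{name}/_eek")
              @Produces(MediaType.APPLICATION_JSON)
              public Response generateEncryptedKeys(@PathParam("name") String keyName,
                                                    @QueryParam("num_keys") int numKeys) {
                // ... generate numKeys EEKs for keyName and return them as JSON ...
                return Response.ok().build();
              }

              // EEK_DECRYPT: the EEK (key name, iv, material) travels in the JSON body of a POST.
              @POST
              @Path("keyversion/{versionName}/_eek")
              @Consumes(MediaType.APPLICATION_JSON)
              @Produces(MediaType.APPLICATION_JSON)
              public Response decryptEncryptedKey(@PathParam("versionName") String versionName,
                                                  Map<String, Object> eekJson) {
                // ... decrypt the EEK described by eekJson and return the key material as JSON ...
                return Response.ok().build();
              }
            }
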
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12656554/HADOOP-10720.12.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.
          See https://builds.apache.org/job/PreCommit-HADOOP-Build/4315//artifact/trunk/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4315//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4315//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Rebasing with HADOOP-10841 and HADOOP-10842

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12655994/HADOOP-10720.11.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4288//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4288//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12655982/HADOOP-10720.10.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.
          See https://builds.apache.org/job/PreCommit-HADOOP-Build/4287//artifact/trunk/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4287//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4287//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Updated patch with feedback, thanks Alejandro Abdelnur

          Arun Suresh added a comment -

          So, initValue is actually the number of values that are synchronously pushed into a key's queue when it is first created (when the Cache's load method is called, or when a warmup is requested), and lowWatermark is the threshold below which an existing key's queue will be asynchronously refilled. I have 2 constructors, one that takes a numInitValues (which I used on the server) and one that assumes numInitValues = numValues (which I used for the client). My assumption was that the server would cache a lot more values of a key, so initValues can be less than numValues.

          A SyncGenerationPolicy.ATLEAST_ONE / INIT_VALUE / LOW_WATERMARK / ALL policy just states that when a client calls getAtMost, at least 1 / initValue / lowWatermark / n entries are returned synchronously.

          I guess the confusion was due to the placement of the comments. Also: yes, I think I should remove the INIT_VALUE policy and constructor parameter altogether and assume the initialization value for a Queue is lowWatermark.
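          For illustration, a minimal sketch of these policy semantics, assuming a switch over the policy; SyncGenerationPolicy, lowWatermark and getAtMost follow the discussion above, while the class name, method name and exact counts are hypothetical, not the actual ValueQueue code:

          // Hypothetical illustration of the policy semantics described above;
          // the real ValueQueue may differ.
          public class SyncPolicyExample {
            enum SyncGenerationPolicy { ATLEAST_ONE, LOW_WATERMARK, ALL }

            /**
             * How many values to generate synchronously, given 'queued' values
             * already cached and 'num' values requested by the caller.
             */
            static int syncCount(SyncGenerationPolicy policy, int num,
                int lowWatermark, int queued) {
              switch (policy) {
                case ATLEAST_ONE:
                  return queued > 0 ? 0 : 1;
                case LOW_WATERMARK:
                  return Math.max(Math.min(lowWatermark, num) - queued, 0);
                case ALL:
                  return Math.max(num - queued, 0);
                default:
                  throw new IllegalArgumentException("Unknown policy: " + policy);
              }
            }
          }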

          Alejandro Abdelnur added a comment -

          A few final comments, then we are good:

          KMS.java:

          • encKeyVersionQueue is created with LOW_WATERMARK but the comment states that INIT_VALUE is used; it seems it should be INIT_VALUE

          ValueQueue.java:

          • there is a constructor that is not being used
          • are we using all the defined policies? It seems we are using LOW_WATERMARK on the server side (when it should be INIT_VALUE, see above), and on the client side it does not matter. If so, why not just handle those 2 cases only.
          • getAtMost() should use a switch/case instead of if/else/… for the policy
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12655860/HADOOP-10720.9.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4281//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4281//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Updated Patch with some doc changes..

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12655638/HADOOP-10720.8.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.fs.shell.TestCopyPreserveFlag
          org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
          org.apache.hadoop.fs.shell.TestTextCommand
          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
          org.apache.hadoop.fs.shell.TestPathData
          org.apache.hadoop.fs.TestDFVariations

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4268//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4268//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Updated patch :

          • Modified ValueQueue to handle the race condition when multiple tasks to refill the queue for a single key are submitted, and to ensure only 1 task is queued.
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12655355/HADOOP-10720.7.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
          org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4257//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4257//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Updating patch addressing review comments.

          Alejandro Abdelnur added a comment -

          CommonConfigurationKeysPublic.java:

          • 80 char rule not being observed by the patch

          kms-acls.xml:

          • it has 2 </configuration> elements at the end, should be one.

          KMS.java:

          • unused import for ExecutionException
          • white space changes
          • 80 char rule not being observed by the patch
          • generateEncryptedKeys(), assert for numKeys > 0
          • we should have 2 new meters, one for generateEEK calls and another for decryptEEK calls, instead of using the getKeyCallsMeter() meter for both.

          KMSClientProvider.java:

          • white space changes
          • unused import for SyncGenerationPolicy

          TestKMS.java:

          • white space changes
          • 80 char rule not being observed by the patch

          TestValueQueue.java:

          • missing license header

          ValueQueue:

          • Instead of having a periodic check for being below the watermark, wouldn't it be more efficient to check if we are below the watermark after getting an EEK and, if so, schedule an async filling (a sketch of this pattern follows at the end of this comment)? We should just take care of not scheduling additional fillings while one is scheduled/in-progress.

          KeyProviderCryptoExtension.java:

          • it does not expose the method warmupEncryptedKeys, which should be wired, in the case of the KMS client, to the ValueQueue
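          A minimal sketch of the refill-after-get idea discussed in this comment, assuming a single filler executor and a per-key in-progress flag; all names are hypothetical, not the actual ValueQueue code:

          import java.util.Queue;
          import java.util.concurrent.ConcurrentHashMap;
          import java.util.concurrent.ConcurrentMap;
          import java.util.concurrent.ExecutorService;
          import java.util.concurrent.Executors;

          // Hypothetical sketch: schedule an async refill when a queue drops below
          // the low watermark, making sure only one refill task per key is queued.
          public class RefillSketch<E> {
            private final ConcurrentMap<String, Boolean> refillInProgress =
                new ConcurrentHashMap<String, Boolean>();
            private final ExecutorService filler = Executors.newSingleThreadExecutor();

            void maybeScheduleRefill(final String keyName, final Queue<E> queue,
                final int lowWatermark, final int numValues) {
              if (queue.size() >= lowWatermark) {
                return; // still above the low watermark, nothing to do
              }
              // putIfAbsent ensures at most one refill task per key at a time
              if (refillInProgress.putIfAbsent(keyName, Boolean.TRUE) == null) {
                filler.submit(new Runnable() {
                  @Override
                  public void run() {
                    try {
                      fill(keyName, queue, numValues - queue.size());
                    } finally {
                      refillInProgress.remove(keyName);
                    }
                  }
                });
              }
            }

            void fill(String keyName, Queue<E> queue, int count) {
              // generate/fetch 'count' values for 'keyName' and offer them to 'queue'
            }
          }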
          Alejandro Abdelnur added a comment -

          a better name, not reflecting the actual impl, would be warmUpEncryptedEncryptionKeys().

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12654964/HADOOP-10720.6.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 1 release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.ipc.TestIPC
          org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
          org.apache.hadoop.fs.TestSymlinkLocalFSFileContext

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4247//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4247//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4247//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Updated Patch.

          • as per Alejandro Abdelnur's and Andrew Wang's suggestion, the actual number of values returned when the client calls getAtMost is now policy driven (can be 1, some init value, equal to low watermark OR all values)
          • Added an initializeQueueForKeys method which can be called to warm up the keys.
          • Added more testcases for ValueQueue
          Alejandro Abdelnur added a comment -

          In the case the queue is empty on the server side, instead of returning just one I would return 100, for the following reasons:

          • cipher init takes about 10 microseconds, regardless of Java or OpenSSL (on my laptop with an i7 2.3GHz)
          • a REST call can be assumed to be around 2 milliseconds
          • most likely file creation comes in bursts.

          Generating 100 EDEKs would take about a millisecond (100 x ~10 microseconds), adding to the ~2 milliseconds of the REST call. That seems a good tradeoff to be able to handle an initial burst efficiently until the queues are warm.

          Also, we could have an additional method in the CE to warm up EZ EDEK queues. The NN could call this at startup time by looping through the list of EZs, and when a new EZ is created.

          Arun Suresh added a comment -

          Attaching updated patch.

          Alejandro Abdelnur, Andrew Wang, thanks for the feedback.. I think I addressed all your comments. A couple of things though:

          prefer to use ArrayLists rather than LinkedLists

          Agreed, but since the fill method takes a Queue, and since LinkedList implements Queue while ArrayList does not, I feel we should stick to LinkedLists.

          is using poll() instead, continuing to trigger refills while the return value..

          yup that would be cool.. thanks !!

          It looks like client requests will only refill a single key. It seems ..

          So, this is how it is now: if a Queue doesn't exist for a key (or if the key has been dropped), the next call from a client will be a synchronous call to fill numInitValues, which can be different from numValues.

          Now, if the client requests n values and there are < n in the queue, ONLY those values will be drained via poll() in the synchronous call and sent to the client, which is why I named the method getAtMost. My thinking is that on the server side, if the cache size is large, and the client gets only 1 key from a request but requires more, it can fire a second request.. and there is a high probability that the next request from the client for the same queue would return the complete batch (we can tune the refill task to be more aggressive based on e2e testing). Also, returning only 1 will kind of guarantee that the server won't konk out if bombarded by concurrent requests.

          Andrew Wang added a comment -

          And a few more things that floated across my head:

          • Should use a ThreadFactoryBuilder to name the fillerThreads and make them daemons (see the sketch after this list)
          • Would be good to Precondition check that the config values are >=0 or >=1 as appropriate
          • We're getting to the point where we might want a Conf object like NNConf or DNConf, up to you
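          A minimal sketch of the ThreadFactoryBuilder suggestion from the first bullet; the thread-name format and pool size are assumptions for illustration:

          import java.util.concurrent.ExecutorService;
          import java.util.concurrent.Executors;
          import java.util.concurrent.ThreadFactory;

          import com.google.common.util.concurrent.ThreadFactoryBuilder;

          // Hypothetical sketch: named daemon filler threads so they show up clearly
          // in thread dumps and do not keep the JVM alive.
          public class FillerThreadFactoryExample {
            public static void main(String[] args) {
              ThreadFactory factory = new ThreadFactoryBuilder()
                  .setDaemon(true)
                  .setNameFormat("value-queue-filler-%d")
                  .build();
              ExecutorService filler = Executors.newFixedThreadPool(2, factory);
              // submit refill tasks to 'filler' here ...
              filler.shutdown();
            }
          }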
          Andrew Wang added a comment -

          One more thing I just thought of: it'd be nice to check that the client-side timeout works for these new methods too. Ideally we'd get some kind of exception saying that we couldn't reach the KMS.

          Andrew Wang added a comment -

          Hey Arun, did a quick skim of this patch, few comments:

          • typos: "IOExeption", "atleast", "funciton", "Defalt"
          • New config keys should be added/documented in core-default.xml. Can c+p the reference to core-default.xml for the javadocs in CommonConfigurationKeys.java.
          • Let's also put the time unit into the config key name, e.g. appending _MS
          • Could we rename the config params to say "refill" rather than just "fill"?
          • I'd prefer to use ArrayLists rather than LinkedLists if we already know the size upfront. Should be more efficient.
          • It looks like the keyQueueFiller could race with an ongoing refill, so it refills above the desired size.
          • Tucu might have already asked this (didn't quite follow the above discussion), but we want to make sure that there isn't a race such that a client triggers a refill, a bunch of others come in and take all the keys, and then the first client blocks on keyQueue.take(). One solution is using poll() instead, continuing to trigger refills while the return value is null (a sketch follows at the end of this comment).
          • It looks like client requests will only refill a single key. It seems like we should request a batch of keys, assuming that network overhead is the dominant cost here. I don't know how costly key generation is, but filling up all the way to the threshold or at least to the lowWatermark would be simple policies.
          • Would also prefer lowWatermark rather than lowWaterMark, since "watermark" is a single word.

          Thanks for working on this! I'll give a more thorough review with your next patch rev.
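          A minimal sketch of the poll()-based retry mentioned in this comment; the 500ms poll interval and the method names are assumptions for illustration:

          import java.util.concurrent.BlockingQueue;
          import java.util.concurrent.TimeUnit;

          // Hypothetical sketch: never block indefinitely on take(); poll and keep
          // triggering refills until a value shows up.
          public class PollRefillSketch<E> {
            E getOne(String keyName, BlockingQueue<E> queue) throws InterruptedException {
              E value = queue.poll();
              while (value == null) {
                triggerAsyncRefill(keyName);                     // keep asking for a refill
                value = queue.poll(500, TimeUnit.MILLISECONDS);  // bounded wait, not take()
              }
              return value;
            }

            void triggerAsyncRefill(String keyName) {
              // schedule an asynchronous refill of the queue for 'keyName'
            }
          }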

          Alejandro Abdelnur added a comment -

          Also, I felt having EncryptedKeyVersion in CryptoExtension makes more sense since this will allow different implementations of CE to have its own EncKeyVersion without forcing it to be a subclass of KPCE.

          We are following this pattern already in other KeyProvider classes, so I would go for requiring subclassing.

          getAtLeast() rename to getAtMost()

          sure.

          Arun Suresh added a comment -

          Alejandro Abdelnur, thanks for the feedback. Will upload a patch shortly addressing these. A couple of things though:

          why are we moving EncryptedKeyVersion inside of CryptoExtension?

          So the KMSClientProvider is an instance of CryptoExtension, not KeyProviderCryptoExtension. Thus, since the EncryptedKeyVersion constructor is protected, it will not be accessible for KMSClientProvider to subclass unless it is part of CryptoExtension.
          Also, I felt having EncryptedKeyVersion in CryptoExtension makes more sense since this will allow different implementations of CE to have its own EncKeyVersion without forcing it to be a subclass of KPCE.

          getAtLeast(), if queue is empty should trigger async queue filling and fill 1 value synchronous to avoid stealing from other request.

          Sure, but I put in getAtLeast() to enforce the contract that the client requires at least 'n' keys from the call.. Maybe I could change the signature to getAtMost()? And return 1 if the Queue is empty?

          Alejandro Abdelnur added a comment -

          CommonConfigurationKeysPublic.java:

          • there is a KMS constant in KMSClientProvider, with the prefix constant defined there. We should bring that here (maybe a simple JIRA to be done before this one).
          • Javadocs typo 'Defalt' should be 'Default'.
          • Default cache size should be a bit higher, I’d say 500.
          • Indicate the expected time unit of the keys in the javadocs and state the default value with the closest unit, i.e. 43200000 is 12hrs.
          • Have a preceding comment <!-- KMSClientProvider configurations -->

          We should have 2 different ACLs, one for generate and one for decrypt.

          KeyProviderCryptoExtension.java:

          • why are we moving EncryptedKeyVersion inside of CryptoExtension?

          KMS.java:

          • I'd make ServerSideEncKeyVersionQueue an outer class.
          • In the fillQueueForKey() method, the parameters to generateEncryptedKey() in the loop can be computed outside of the loop once.
          • There are a few whitespace changes.
          • Indentation of added code is funny and beyond 80 chars.
          • DECRYPT should be a GET. Both ENCRYPT and DECRYPT REST calls have unused params. Also, I would call the REST methods decryptEncryptionKey & generateEncryptionKeys(). The doEncryptionOp() could be folded into generateEncryptionKeys().

          KMSClientProvider.java:

          • don’t use wildcard imports (and preferably no static imports)
          • white space changes
          • parseJSONEncKeyVersion() should throw an IOException if any value is null, all are required values.

          KMSRESTConstants.java:

          • EDEK_NUM_KEYS should be EEK_NUM_KEYS
          • KEY_NAME_FIELD constant seems unused

          KMSServerJSONUtils:

          • white space changes

          KMSWebApp.java:

          • the change in this class is not correct

          TestKMS.java:

          • white space changes

          ValueQueue.java:

          • getAtleast() should be getAtLeast()
          • getAtLeast(), if the queue is empty, should trigger async queue filling and fill 1 value synchronously to avoid stealing from other requests.
          • Also, it seems we could make this class concrete and have an inner Fetcher interface that is provided in the constructor.
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12654236/HADOOP-10720.4.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4219//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4219//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Modified patch : Fixed the javac warnings and refactored a bit

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12654202/HADOOP-10720.3.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          -1 javac. The applied patch generated 1261 javac compiler warnings (more than the trunk's current 1260 warnings).

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4218//testReport/
          Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4218//artifact/trunk/patchprocess/diffJavacWarnings.txt
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4218//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Updating patch.. fixed up test-cases and added javadocs

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12653792/HADOOP-10720.2.patch
          against trunk revision .

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4205//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Alejandro Abdelnur, Updated with your suggestions. Thanks for the pointer on using Guava Cache instead of ConcurrentHashMap.

          KMSClientProvider.java, keyQueueFiller runnable, the for loop should clone the keyQueues.entry(), even if the Map is a concurrent one.

          Cloning the Entry object does not seem to be possible, so I made a copy of the Map before iterating. If the number of keys is really large, this might take a slight perf hit, I guess..

          Alejandro Abdelnur added a comment -

          I've done a quick scan to the patch, it is in the right direction, a few things I've noticed:

          • KMSClientProvider.java imports, your IDE is rearranging them, making false changes
          • KMSClientProvider.java, keyQueueFiller runnable, the for loop should clone the keyQueues.entry(), even if the Map is a concurrent one.
          • KMSClientProvider.java, using a concurrent Map and the putIfAbsent() method results in a LinkedBlockingQueue instance being created on every call. Instead I would use a Guava Cache, and the benefit is that you get eviction policies for free (we should discard a queue that is not used for X amount of time); see the sketch after this list.
          • KMS server side: I think on the server side we should do identical caching to minimize the REST API call time. It should be possible to have a common class with the caching logic and watermark triggers to be used on both sides.
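          A minimal sketch of the Guava LoadingCache idea from the bullet above; the key/value types and the 10-minute expiry are assumptions for illustration only:

          import java.util.concurrent.ExecutionException;
          import java.util.concurrent.LinkedBlockingQueue;
          import java.util.concurrent.TimeUnit;

          import com.google.common.cache.CacheBuilder;
          import com.google.common.cache.CacheLoader;
          import com.google.common.cache.LoadingCache;

          // Hypothetical sketch: per-key queues held in a Guava cache so idle keys
          // are evicted automatically instead of living forever in a ConcurrentHashMap.
          public class KeyQueueCacheSketch {
            private final LoadingCache<String, LinkedBlockingQueue<byte[]>> keyQueues =
                CacheBuilder.newBuilder()
                    .expireAfterAccess(10, TimeUnit.MINUTES) // drop queues for idle keys
                    .build(new CacheLoader<String, LinkedBlockingQueue<byte[]>>() {
                      @Override
                      public LinkedBlockingQueue<byte[]> load(String keyName) {
                        // created lazily, only when a key is first requested
                        return new LinkedBlockingQueue<byte[]>();
                      }
                    });

            LinkedBlockingQueue<byte[]> queueFor(String keyName) throws ExecutionException {
              return keyQueues.get(keyName);
            }
          }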
          Arun Suresh added a comment -

          Alejandro Abdelnur, Andrew Wang,
          Uploading initial patch based on your suggestions for review

          This patch depends on the latest patch from HADOOP-10719
          Once we finalize on that, I shall modify this..

          Andrew Wang added a comment -

          Great, sounds good. Looking forward to the patch!

          Alejandro Abdelnur added a comment -

          On generateEDEK API being batch?

          I wouldn't make it batch; that is a client/server implementation detail. Being batch would complicate things for clients, as clients would have to hold on to N values and then use them as needed. I'd say let the KMS client do that holding entirely, and the NN asks for one at a time.

          On how watermarks are determined?

          First, I would start simple, fixed by configuration; then I would make them variable based on demand.

          On punting the caching and batch to a follow up jira?

          Sure, we could have the KMS client/server batch REST API for now and the client having a queue, which gets refilled when depleted. Later we add the refill daemons on both ends and the queue on the server side. By doing this we don't have to change the REST API for the optimizations.

          Andrew Wang added a comment -

          This sounds really good Tucu. We're still going to have some complexity on the NN since there's still the possibility of a synchronous call, but having the pre-populated cache on the KMSClientProvider will save us having to do it in the NN.

          Few q's and comments:

          • Are you planning to make the generateEDEK API batch? Would be more efficient at filling the queue.
          • How are the watermarks determined? The rate of generation will differ greatly between EZs, ideally we have enough to handle bursts of write traffic (i.e. MR output)
          • This client caching and batch stuff could also be punted to a later JIRA, I'm interested in seeing this committed soon so we can start using it on the fs-encryption branch.
          Alejandro Abdelnur added a comment -

          Further details on how we could optimize the extension operations in the KMS (server/client) to reduce/eliminate calls from the NN on file create():

          The KMSClientProvider would implement the CryptoExtensions interface, thus the KeyProviderExtensions would delegate to it for generating and decrypting EDEKs.

          The KMS client will have a queue with IV&EDEKs per key. If the client queue is new/empty, a call to the KMS asking for N IV&EDEKs for the key will be sent; the KMS response will have at least one IV&EDEK. The returned IV&EDEKs are put in the queue and the first entry from the queue is returned to the KMS client caller. The KMS client will have a daemon thread that keeps requesting IV&EDEKs from the KMS for a key, using a low and a high mark to start and stop requesting more IV&EDEKs from the KMS. This means that most of the time the KMS client will have IV&EDEKs locally cached for the NN to use.

          On the KMS side, the same queue and replenisher daemon thread will maintain another cache of IV&EDEKs. On KMS client requests, typically, IV&EDEKs will not be generated synchronously but simply drained from the queue. If the queue is new/empty, a small number of IV&EDEKs will be generated synchronously and returned to the caller. If the queue is not empty and the client requests N but the queue has fewer than N, the available elements will be returned and the daemon thread will start producing more up to the high mark.

          This dual cache will significantly reduce NN->KMS calls and reduce/eliminate the calls that happen synchronously with the NN create() call.

          Suggestion: look at the JDK ThreadPoolExecutor; it uses similar logic with wait/notify, thus not sleeping/looping around. A rough sketch of the replenisher loop follows.
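          A rough sketch of such a replenisher loop, assuming a single per-key queue with low/high marks; all names and numbers are illustrative, not the actual KMS client code:

          import java.util.Collections;
          import java.util.List;
          import java.util.concurrent.BlockingQueue;
          import java.util.concurrent.TimeUnit;

          // Hypothetical sketch: keep the per-key queue between a low and a high mark.
          public class ReplenisherSketch<E> implements Runnable {
            private final BlockingQueue<E> queue;
            private final int lowMark;
            private final int highMark;
            private volatile boolean running = true;

            ReplenisherSketch(BlockingQueue<E> queue, int lowMark, int highMark) {
              this.queue = queue;
              this.lowMark = lowMark;
              this.highMark = highMark;
            }

            @Override
            public void run() {
              while (running) {
                try {
                  if (queue.size() < lowMark) {
                    // one batched request for enough EDEKs to reach the high mark again
                    for (E edek : fetchFromKms(highMark - queue.size())) {
                      queue.offer(edek);
                    }
                  }
                  // simple pacing; wait/notify (as in ThreadPoolExecutor) would avoid polling
                  TimeUnit.MILLISECONDS.sleep(100);
                } catch (InterruptedException ie) {
                  Thread.currentThread().interrupt();
                  running = false;
                }
              }
            }

            List<E> fetchFromKms(int count) {
              // placeholder for the batched KMS REST call returning 'count' EDEKs
              return Collections.<E>emptyList();
            }
          }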

          Alejandro Abdelnur added a comment -

          The KMS should have separate ACLs for the extension operations.

          Alejandro Abdelnur added a comment -

          With the change of direction in HADOOP-10719, the KMSClientProvider would implement the CryptoExtension interface and the KMS would provide a REST API for such. The REST API should support batch requests and the KMSClientProvider should be able to park the received EDEKs until they are requested.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12651610/COMBO.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4124//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4124//console

          This message is automatically generated.

          Hide
          Alejandro Abdelnur added a comment -

          new combo for testpatch.

          Hide
          Alejandro Abdelnur added a comment -

          New patch syncing up with HADOOP-10719.

          Hide
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12651562/COMBO.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4117//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4117//console

          This message is automatically generated.

          Hide
          Andrew Wang added a comment -

          +1 pending HADOOP-10719, thanks tucu

          Hide
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12651545/COMBO.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4112//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4112//console

          This message is automatically generated.

          Hide
          Alejandro Abdelnur added a comment -

          updated patch based on changes in HADOOP-10719.

          Hide
          Andrew Wang added a comment -

          Okay, after seeing this, my comment on HADOOP-10719 about making the new KP methods abstract is somewhat answered. It'd still be somewhat better to take a composition-based approach (e.g. make it a helper method that subclasses could call), but it looks good pending resolution of things in HADOOP-10719.

          Thanks Tucu!
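
          A rough sketch of the composition idea mentioned in the comment above: package the default crypto logic as a helper that providers can delegate to, instead of forcing an abstract method onto every KeyProvider subclass. All names below (CryptoHelper, LocalProvider, RemoteProvider) are hypothetical and only illustrate the design choice.

          // Hypothetical sketch contrasting composition (delegating to a helper) with a
          // remote-backed provider that supplies its own behaviour.
          public class CompositionSketch {

            static class EncryptedKey { }

            /** Default generate logic packaged as a plain reusable helper. */
            static class CryptoHelper {
              EncryptedKey generateEncryptedKey(String keyName) {
                // default behaviour, e.g. create an IV locally and encrypt with the key material
                return new EncryptedKey();
              }
            }

            /** What a provider exposes for the extension operations. */
            interface CryptoExtension {
              EncryptedKey generateEncryptedKey(String keyName);
            }

            /** A local provider simply delegates to the helper (composition, no inheritance). */
            static class LocalProvider implements CryptoExtension {
              private final CryptoHelper helper = new CryptoHelper();
              @Override public EncryptedKey generateEncryptedKey(String keyName) {
                return helper.generateEncryptedKey(keyName);
              }
            }

            /** A remote-backed provider supplies its own behaviour, e.g. by calling the KMS. */
            static class RemoteProvider implements CryptoExtension {
              @Override public EncryptedKey generateEncryptedKey(String keyName) {
                // would issue the REST call discussed in this JIRA instead of local work
                return new EncryptedKey();
              }
            }
          }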

          Hide
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12651358/COMBO.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4105//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4105//console

          This message is automatically generated.

          Hide
          Alejandro Abdelnur added a comment -

          It seems the timeout occurred in an unrelated test; running a new test-patch.

          Hide
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12651358/COMBO.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The following test timeouts occurred in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

          org.apache.hadoop.http.TestHttpServer

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4104//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4104//console

          This message is automatically generated.

          Hide
          Alejandro Abdelnur added a comment -

          combo of HADOOP-10719 and HADOOP-10720 (new one), for test-patch purposes.

          Hide
          Alejandro Abdelnur added a comment -

          Updated patch which takes care of the two versions of the generate/decrypt methods (in the prior patch I forgot to 'git add -u' after finishing the changes).

          Hide
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12651305/COMBO.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4102//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4102//console

          This message is automatically generated.

          Hide
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12651249/HADOOP-10719-10720-COMBO-forTestPatch.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          -1 javac. The patch appears to cause the build to fail.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4098//console

          This message is automatically generated.

          Hide
          Alejandro Abdelnur added a comment -

          combo for HADOOP-10719 and this JIRA for testpatch purposes only.

          Hide
          Alejandro Abdelnur added a comment -

          patch depends on HADOOP-10719.


            People

            • Assignee:
              Arun Suresh
              Reporter:
              Alejandro Abdelnur
            • Votes:
              0
              Watchers:
              6
