Hadoop HDFS / HDFS-6606

Optimize HDFS Encrypted Transport performance

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.6.0
    • Component/s: datanode, hdfs-client, security
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed
    • Release Note:
      HDFS now supports the option to configure AES encryption for block data transfer. AES offers improved cryptographic strength and performance over the prior options of 3DES and RC4.

      Description

      In HDFS-3637, Aaron T. Myers added support for encrypting the DataTransferProtocol; it was great work.
      It utilizes the SASL DIGEST-MD5 mechanism (with QOP auth-conf) and supports three security strengths:

      • high: 3des or rc4 (128 bits)
      • medium: des or rc4 (56 bits)
      • low: rc4 (40 bits)

      3DES and RC4 are slow, reaching only tens of MB/s:
      http://www.javamex.com/tutorials/cryptography/ciphers.shtml
      http://www.cs.wustl.edu/~jain/cse567-06/ftp/encryption_perf/
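For context, the relative speeds are easy to reproduce with a quick JCE probe. This is an illustrative sketch, not code from this patch; absolute numbers depend heavily on the JVM, provider, and hardware:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

/**
 * Rough single-thread throughput probe for JCE ciphers (illustrative only).
 * Transformation names are standard JCE identifiers.
 */
public class CipherBench {
  public static double mbPerSec(String transformation, String algo,
                                int keyBytes, boolean needsIv) throws Exception {
    byte[] key = new byte[keyBytes];
    for (int i = 0; i < keyBytes; i++) key[i] = (byte) (i + 1);  // any non-trivial key
    byte[] data = new byte[1 << 20];                             // 1 MB buffer
    Cipher c = Cipher.getInstance(transformation);
    if (needsIv) {
      c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, algo),
             new IvParameterSpec(new byte[c.getBlockSize()]));
    } else {
      c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, algo));
    }
    long start = System.nanoTime();
    int rounds = 16;                          // encrypt 16 MB total
    for (int i = 0; i < rounds; i++) {
      c.update(data);
    }
    double secs = (System.nanoTime() - start) / 1e9;
    return rounds / secs;                     // MB/s
  }

  public static void main(String[] args) throws Exception {
    System.out.printf("DESede/CBC: %.0f MB/s%n",
        mbPerSec("DESede/CBC/NoPadding", "DESede", 24, true));
    System.out.printf("ARCFOUR:    %.0f MB/s%n",
        mbPerSec("ARCFOUR", "ARCFOUR", 16, false));
    System.out.printf("AES/CTR:    %.0f MB/s%n",
        mbPerSec("AES/CTR/NoPadding", "AES", 16, true));
  }
}
```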

      I will give more detailed performance data in the future. It is clearly a bottleneck and will significantly affect end-to-end performance.

      AES (Advanced Encryption Standard) is recommended as a replacement for DES and is more secure. With AES-NI support, throughput can reach nearly 2 GB/s, so it will no longer be the bottleneck. AES and CryptoCodec support was added in HADOOP-10150, HADOOP-10603 and HADOOP-10693 (we may need to add a new mode support for AES).

      This JIRA will use AES with AES-NI support as encryption algorithm for DataTransferProtocol.
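The cipher-stream idea being applied to DataTransferProtocol can be sketched with plain JCE. This is an illustrative round-trip only; the actual patch uses Hadoop's CryptoCodec (which adds AES-NI via OpenSSL), and the class below is not from the patch:

```java
import java.io.*;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

/**
 * Minimal AES-CTR stream encryption sketch with plain JCE: the sender wraps
 * the wire output stream, the receiver wraps the wire input stream.
 */
public class AesCtrStreams {
  public static byte[] roundTrip(byte[] plain, byte[] key, byte[] iv)
      throws Exception {
    SecretKeySpec k = new SecretKeySpec(key, "AES");

    // Encrypt side: wrap the outgoing stream.
    Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
    enc.init(Cipher.ENCRYPT_MODE, k, new IvParameterSpec(iv));
    ByteArrayOutputStream wire = new ByteArrayOutputStream();
    try (OutputStream out = new CipherOutputStream(wire, enc)) {
      out.write(plain);
    }

    // Decrypt side: wrap the incoming stream with the same key/IV.
    Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
    dec.init(Cipher.DECRYPT_MODE, k, new IvParameterSpec(iv));
    try (InputStream in = new CipherInputStream(
             new ByteArrayInputStream(wire.toByteArray()), dec)) {
      return in.readAllBytes();
    }
  }

  public static void main(String[] args) throws Exception {
    byte[] key = new byte[16], iv = new byte[16];
    new SecureRandom().nextBytes(key);
    new SecureRandom().nextBytes(iv);
    byte[] msg = "block data over DataTransferProtocol".getBytes("UTF-8");
    System.out.println(Arrays.equals(msg, roundTrip(msg, key, iv))); // true
  }
}
```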

      1. HDFS-6606.001.patch
        39 kB
        Yi Liu
      2. HDFS-6606.002.patch
        41 kB
        Yi Liu
      3. HDFS-6606.003.patch
        42 kB
        Yi Liu
      4. HDFS-6606.004.patch
        45 kB
        Yi Liu
      5. HDFS-6606.005.patch
        45 kB
        Yi Liu
      6. HDFS-6606.006.patch
        45 kB
        Yi Liu
      7. HDFS-6606.007.patch
        46 kB
        Yi Liu
      8. HDFS-6606.008.patch
        47 kB
        Yi Liu
      9. HDFS-6606.009.patch
        47 kB
        Yi Liu
      10. OptimizeHdfsEncryptedTransportperformance.pdf
        316 kB
        Yi Liu

        Issue Links

          Activity

          Alejandro Abdelnur added a comment -

          Yi Liu, it is great you are taking on this, looking forward to see the patch.

          For now I have a question, any reason not to do the same for the Hadoop RPC encryption?

          Mike Yoder added a comment -

          Agreed this is a great thing to have - the existing choices are all bad.

          Andrew Purtell added a comment -

          For now I have a question, any reason not to do the same for the Hadoop RPC encryption?

          Krb5 and Java SE support AES modes: http://docs.oracle.com/javase/7/docs/technotes/guides/security/jgss/jgss-features.html . I think the JCE provider can be swapped for an accelerated option.

          Yi Liu added a comment -

          Thanks Alejandro Abdelnur, Mike Yoder and Andrew Purtell for your comments.

          Alejandro Abdelnur:
          I filed JIRA HADOOP-10768 for optimizing Hadoop RPC encryption performance. I hadn't filed that JIRA before because 1) Hadoop utilizes the SASL GSSAPI and DIGEST-MD5 mechanisms for secure authentication and data protection for RPC, and we are not able to add custom encryption to them; 2) RPC messages are small, so it's unclear whether it's worth it.
          For #1, you reminded me that we could use GssKrb5 only to exchange user secrets rather than to encrypt the whole RPC message, and instead use the same approach as in this JIRA to encrypt RPC messages. You are right.
          For #2, we agree we should benchmark to see the real benefit, then make a trade-off.

          Andrew Purtell:
          Thanks for the information, you are right, but it doesn't support AES-NI by default. Maybe we can handle it in the same way as in this JIRA. It's more flexible and can resolve the encryption issue of DIGEST-MD5.

          Andrew Purtell added a comment -

          Thanks for the information, you are right, but it doesn't support AES-NI by default. Maybe we can handle it in the same way as in this JIRA. It's more flexible and can resolve the encryption issue of DIGEST-MD5.

          I see you opened HADOOP-10768 for that.

          Yi Liu added a comment -

          Attached a brief design for this optimization.
          Our goals are:

          • Support using CryptoCodec for encryption of HDFS transport. By default, client and server will negotiate to use AES-CTR.
          • Compatibility: old clients and old servers still work.
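The negotiation-with-fallback goal above could be sketched as follows (hypothetical names, not the patch's code): the client advertises the cipher suites it supports, the server picks the first one it also supports, and an empty result means "old peer, fall back to the existing SASL encryption".

```java
import java.util.*;

/** Hypothetical sketch of suite negotiation with backward compatibility. */
public class SuiteNegotiation {
  public static Optional<String> choose(List<String> clientSuites,
                                        Set<String> serverSuites) {
    for (String s : clientSuites) {
      if (serverSuites.contains(s)) {
        return Optional.of(s);   // first suite both sides support
      }
    }
    return Optional.empty();     // old server or no common suite: fall back
  }

  public static void main(String[] args) {
    System.out.println(choose(Arrays.asList("AES/CTR/NoPadding"),
        new HashSet<>(Arrays.asList("AES/CTR/NoPadding")))); // Optional[AES/CTR/NoPadding]
    System.out.println(choose(Arrays.asList("AES/CTR/NoPadding"),
        Collections.emptySet()));                            // Optional.empty
  }
}
```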
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12664402/HDFS-6606.001.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7769//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7769//console

          This message is automatically generated.

          Yi Liu added a comment -

          Small update: only send the cipher options for negotiation if the requested QOP contains privacy (auth-conf).
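A minimal sketch of such a check, assuming the standard Sasl.QOP property key ("javax.security.sasl.qop"); illustrative only, not the patch's code:

```java
import java.util.Map;

/** Sketch: cipher options are only sent when the requested SASL QOP list
 *  includes "auth-conf" (privacy). */
public class QopCheck {
  public static boolean requestedQopContainsPrivacy(Map<String, String> saslProps) {
    String qop = saslProps.get("javax.security.sasl.qop");  // standard Sasl.QOP key
    if (qop == null) {
      return false;
    }
    for (String part : qop.split(",")) {
      if ("auth-conf".equals(part.trim())) {
        return true;
      }
    }
    return false;
  }
}
```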

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12665279/HDFS-6606.002.patch
          against trunk revision fa80ca4.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7854//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7854//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

          Yi, the approach LGTM. I'll ping Aaron T. Myers in case I'm missing something.

          My feedback on the patch:

          CryptoInputStream.java:

          • read(BB): the if (n >= 0) THEN/ELSE section at the end: why do we need this now and not before? Or is it a bug in CIS?

          DataTransferSaslUtils.java:

          • requestedQopContainsPrivacy(): shouldn't we trim() the values in the set, just in case somebody added whitespace, or is this not possible?
          • sendSaslMessageAnNegotiationCipherOptions(): can payload be NULL? Or is that an error if we get here? If the latter, we should throw an exception.
          • createStreamPair(): we are using the same key and IV for both streams. We should either use 2 different keys or 2 different IVs, so as not to repeat the use of a key/IV pair; or simply exchange 2 CipherOptions, one for IN and one for OUT.
          • the wrapping/unwrapping usage seems too asymmetrical as implemented; is it possible to have 2 symmetric methods used at the same level on both sides?

          hdfs.proto:

          • CipherOption: I don't think the fields should be optional, they are all required, no?

          SaslDataTransferServer.java:

          • why do we need 2 confs?, what was wrong with the original one?
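The key/IV reuse concern in createStreamPair() above can be addressed by generating an independent key/IV pair per direction, e.g. (illustrative sketch, not the patch's code):

```java
import java.security.SecureRandom;

/** Sketch: never reuse one key/IV pair for both directions; generate an
 *  independent pair for the inbound and outbound streams. */
public class StreamPairKeys {
  public static byte[][] twoKeyIvPairs(int keyLen, int ivLen) {
    SecureRandom rng = new SecureRandom();
    byte[] inKey = new byte[keyLen], inIv = new byte[ivLen];
    byte[] outKey = new byte[keyLen], outIv = new byte[ivLen];
    rng.nextBytes(inKey);
    rng.nextBytes(inIv);
    rng.nextBytes(outKey);
    rng.nextBytes(outIv);
    return new byte[][] { inKey, inIv, outKey, outIv };
  }
}
```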
          Yi Liu added a comment -

          Thanks Alejandro Abdelnur for the review.

          read(BB): the if (n >= 0) THEN/ELSE section at the end: why do we need this now and not before? Or is it a bug in CIS?

          It's a bug in CIS, and I filed a separate JIRA: HADOOP-11040, please also help to review.

          requestedQopContainsPrivacy(): shouldn't we trim() the values in the set, just in case somebody added whitespace, or is this not possible?

          When the qop configs are added to Sasl properties, they have already been trimmed, so we don't need to trim again.

          sendSaslMessageAnNegotiationCipherOptions(): can payload be NULL? Or is that an error if we get here? If the latter, we should throw an exception.

          A SASL response indeed could be null if the challenge accompanied a "SUCCESS", but in this case it's the response for the first challenge and cannot possibly be null. It's the same logic as the original code; we just add the cipher-option negotiation in this step.

          createStreamPair(): we are using the same key and IV for both streams. We should either use 2 different keys or 2 different IVs, so as not to repeat the use of a key/IV pair; or simply exchange 2 CipherOptions, one for IN and one for OUT.

          You are right, let's generate two key/iv pairs for IN and OUT.

          the wrapping/unwrapping usage seems too asymmetrical as implemented; is it possible to have 2 symmetric methods used at the same level on both sides?

          Right, I will improve this.

          CipherOption: I don't think the fields should be optional, they are all required, no?

          The CipherSuite field is required, and the key/iv fields should be optional: the client sends several CipherOptions for negotiation (only the CipherSuite field is needed), then the server chooses one CipherOption it supports and fills in the key/iv (the key should be encrypted) when the SASL QOP negotiation is successful.
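The field layout described above might look roughly like this in hdfs.proto (an illustrative sketch; the actual message name, field names and numbers may differ):

```protobuf
// Illustrative sketch only; compare against the real hdfs.proto.
message CipherOptionProto {
  required CipherSuiteProto suite = 1;  // always sent by the client
  optional bytes inKey  = 2;            // filled in (encrypted) by the server
  optional bytes inIv   = 3;
  optional bytes outKey = 4;
  optional bytes outIv  = 5;
}
```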

          why do we need 2 confs?, what was wrong with the original one?

          We need a Configuration to construct a CryptoCodec, but the existing one is DNConf.

          Will update the patch later.

          Yi Liu added a comment -

          Update the patch for all comments.

          why do we need 2 confs?, what was wrong with the original one?

          In the new patch, we use DNConf to get the Configuration.

          HADOOP-11040 will not affect this JIRA, so all tests are expected to pass.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12665737/HDFS-6606.003.patch
          against trunk revision 258c7d0.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7866//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7866//console

          This message is automatically generated.

          Yi Liu added a comment -

          The test failures are not related; I can run them successfully in my local environment.

          Chris Nauroth added a comment -

          Hi, Yi Liu. Nice work! This looks like it's fully compatible too with the recent work in HDFS-2856 to remove the requirement to run DataNode as root.

          If I understand correctly, the DFSClient is still going to contact the NameNode to obtain an encryption key via ClientProtocol#getDataEncryptionKey when dfs.encrypt.data.transfer is true, but then the result wouldn't actually be used if a cipher is negotiated. It's a shame to keep around that extraneous RPC, but it's very small, and I don't see an easy way to change the code to avoid it. Maybe we could queue this up for future consideration.

          I'd just like to suggest a few more tests:

          1. TestSaslDataTransfer: A new test here would validate that it works with the HDFS-2856 style, setting dfs.data.transfer.protection instead of dfs.encrypt.data.transfer.
          2. TestBalancerWithEncryptedTransfer: A new test here would validate that everything works correctly end-to-end with the balancer.
          3. TestBalancerWithSaslDataTransfer: Same as #2, using the HDFS-2856 style with dfs.data.transfer.protection configured instead of dfs.encrypt.data.transfer.
          Yi Liu added a comment - - edited

          Thanks Chris Nauroth for review. You are right, this JIRA is fully compatible with the work in HDFS-2856.

          If I understand correctly, the DFSClient is still going to contact the NameNode to obtain an encryption key via ClientProtocol#getDataEncryptionKey when dfs.encrypt.data.transfer is true, but then the result wouldn't actually be used if a cipher is negotiated. It's a shame to keep around that extraneous RPC, but it's very small, and I don't see an easy way to change the code to avoid it. Maybe we could queue this up for future consideration.

          Right, the DFSClient is still going to contact the NN to obtain a key via ClientProtocol#getDataEncryptionKey. But the obtained key is still used while a cipher option is being negotiated: we use it to encrypt the negotiated cipher key (using SASL wrap/unwrap). So the key obtained via getDataEncryptionKey is only used to encrypt the cipher key, and data is now encrypted with the cipher key.

          The proposed approach doesn't add an extra RPC and works with the original configuration: if dfs.encrypt.data.transfer is true or dfs.data.transfer.protection (HDFS-2856 style) is privacy, the dfs client and datanode will negotiate a cipher for encryption.
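The wrapping step described above can be sketched abstractly. In the real code the protection of the negotiated key is done via SASL wrap/unwrap; in this illustrative sketch a plain AES/CTR encryption under the SASL-derived session key stands in for that step:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

/** Abstract sketch: the freshly negotiated cipher key is never sent in the
 *  clear; it travels protected under the SASL session secret. AES/CTR here
 *  stands in for SaslClient#wrap / SaslServer#unwrap. */
public class KeyWrapSketch {
  public static byte[] protect(byte[] sessionKey, byte[] iv, byte[] negotiatedKey)
      throws Exception {
    Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
    c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(sessionKey, "AES"),
           new IvParameterSpec(iv));
    return c.doFinal(negotiatedKey);   // what would go on the wire
  }

  public static byte[] recover(byte[] sessionKey, byte[] iv, byte[] wrapped)
      throws Exception {
    Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
    c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(sessionKey, "AES"),
           new IvParameterSpec(iv));
    return c.doFinal(wrapped);         // peer recovers the cipher key
  }
}
```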

          The tests you suggest are pretty good. We may only need to check in these tests that the dfs client and datanode indeed negotiate a cipher option for the current implementation, since:

          • TestSaslDataTransfer and TestBalancerWithSaslDataTransfer already use dfs.data.transfer.protection and cover encryption test (privacy), and TestBalancerWithEncryptedTransfer already includes end-to-end tests with the balancer and the proposed approach works with original configuration.

          I will update the patch for your comments later.

          Chris Nauroth added a comment -

          But the obtained key is still used while a cipher option is being negotiated, we use it to encrypt the negotiated cipher key (using sasl wrap/unwrap)...

          Thanks for clarifying, Yi. I missed the significance of this part. One additional note though: in the case of setting dfs.data.transfer.protection to privacy, the client will not fetch an encryption key from the NameNode. Instead, the SASL handshake password is based on the block access token password. The main difference here compared to dfs.encrypt.data.transfer is the lack of a "per-session" nonce and the ability to control the encryption algorithm used by setting dfs.encrypt.data.transfer.algorithm. In that sense, dfs.encrypt.data.transfer still has some capabilities that you can't get by using dfs.data.transfer.protection.

          I agree now that existing tests cover it, and you can disregard my earlier suggestions. I don't see any additional configuration variations to test.

          I'm +1 for patch v3, pending resolution of feedback from Alejandro Abdelnur too. Thanks again!

          Alejandro Abdelnur added a comment -

          The DataTransferSaslUtil.java#negotiateCipherOption() method hardcodes 16 bytes (128 bits) for keys; any reason this is not configurable so we can use 192 or 256?

          Also, if we are transferring data for a file that is in an encryption zone, the data is already encrypted. Thus, we could do the transfer without encryption and avoid the penalty of unnecessary double encryption. Though we can tackle this in a follow-up JIRA.
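Making the key length configurable could look roughly like this (a sketch using java.util.Properties in place of Hadoop's Configuration; the property name here is hypothetical, not necessarily the one the patch introduced):

```java
import java.util.Properties;

/** Sketch: read the negotiated-key length from configuration instead of
 *  hardcoding 16 bytes, restricted to valid AES key sizes. */
public class KeyLengthConf {
  public static final String KEY_BITS_PROP =
      "dfs.encrypt.data.transfer.cipher.key.bitlength";   // hypothetical name
  public static final int DEFAULT_KEY_BITS = 128;

  public static int keyLengthBytes(Properties conf) {
    int bits = Integer.parseInt(
        conf.getProperty(KEY_BITS_PROP, String.valueOf(DEFAULT_KEY_BITS)));
    if (bits != 128 && bits != 192 && bits != 256) {
      throw new IllegalArgumentException(
          "AES key length must be 128, 192 or 256 bits: " + bits);
    }
    return bits / 8;
  }
}
```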

          Yi Liu added a comment -

          Thanks Chris Nauroth for the review and the explanation of the difference for the encryption key(password) between those two.

          Yi Liu added a comment -

          Thanks Alejandro Abdelnur for review, I will make the key length configurable and update the patch later.
          For the encryption zone, you are right and I will handle it in a follow up JIRA.

          Yi Liu added a comment -

          Update the patch for Alejandro Abdelnur's latest comment.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12667329/HDFS-6606.004.patch
          against trunk revision 7498dd7.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
          org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7963//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7963//console

          This message is automatically generated.

          Yi Liu added a comment -

          The test failures are not related. For TestSaslDataTransfer, Jenkins reported a NoClassDefFoundError for CipherOption, but the class does exist; I can run the test successfully in my local environment.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12667329/HDFS-6606.004.patch
          against trunk revision 90c8ece.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.TestRollingUpgrade

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7965//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7965//console

          This message is automatically generated.

          Srikanth Upputuri added a comment -

          This is a very nice effort. I learned a great deal reading through this JIRA and HDFS-3637. But I have a couple of fundamental questions here.

          Does this patch improve data transfer speed? But isn't the existing RC4 option much faster (as shown in the comparison analysis)?

          Does this patch improve the data transfer channel confidentiality? But, if we transfer the AES keys and IVs over a 3DES encrypted channel, isn't the overall confidentiality effectively the same, since someone who can successfully intercept and decrypt the 3DES traffic can read the AES keys?

          Am I missing something here?

          Yi Liu added a comment -

          Thanks Srikanth Upputuri for taking a look.

          Does this patch improve data transfer speed? But isn't the existing RC4 option much faster (as shown in the comparison analysis)?

          Sure, this JIRA uses CryptoCodec (default: AES with AES-NI support), which is much faster. (RC4 is less than 100 MB/s, but AES with AES-NI support is more than 1.5 GB/s.)

          Does this patch improve the data transfer channel confidentiality? But, if we transfer the AES keys and IVs over a 3DES encrypted channel, isn't the overall confidentiality effectively same as someone who can successfully intercept and decrypt the 3DES traffic can read the AES keys?

          In this JIRA, 3DES is used to encrypt/decrypt the negotiated cipher key (originally it was used to encrypt the transferred data). You are right that the channel confidentiality is the same, but that's sufficient; our goal here is to improve performance.
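
          The split Yi describes — bulk block data protected by fast AES while the slower SASL cipher only has to protect the short key exchange — can be sketched with plain JCE calls. This is an illustrative model, not the actual Hadoop CryptoCodec or SASL code; step 2 merely stands in for the real sasl.wrap/unwrap of the key bytes over the handshake channel.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class NegotiationSketch {
    public static void main(String[] args) throws Exception {
        // 1. Generate a fresh AES session key for the data channel.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey sessionKey = kg.generateKey();

        // 2. The SASL handshake channel (DIGEST-MD5 with 3DES/RC4) only has
        //    to carry these few key bytes, not the bulk block data. In the
        //    real protocol this is sasl.wrap(...) / sasl.unwrap(...).
        byte[] wireBytes = sessionKey.getEncoded();

        // 3. Both sides then switch to AES (CTR mode here) for the actual
        //    block transfer, which is where AES-NI speed matters.
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE,
                 new SecretKeySpec(wireBytes, "AES"), new IvParameterSpec(iv));
        byte[] plain = "block data".getBytes("UTF-8");
        byte[] cipherText = enc.doFinal(plain);

        // The peer decrypts with the same negotiated key and IV.
        Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
        dec.init(Cipher.DECRYPT_MODE,
                 new SecretKeySpec(wireBytes, "AES"), new IvParameterSpec(iv));
        System.out.println(Arrays.equals(plain, dec.doFinal(cipherText))); // prints true
    }
}
```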

          Yi Liu added a comment -

          BTW, the AES performance you saw is without AES-NI support.

          Srikanth Upputuri added a comment -

          In this JIRA, 3DES is used to encrypt/decrypt the negotiated cipher key (originally it was used to encrypt the transferred data). You are right, the channel confidentiality is the same, but it's enough. Our goal is to improve the performance.

          Thank you for the explanation. I read about AES-NI and I now understand that with a JCE provider like Diceros, AES performance will improve significantly. However, if we need to provide support for increased confidentiality with AES, can we not do it by implementing the GSSAPI mechanism in addition to the existing DIGEST-MD5, the same way it is implemented for RPC? The Java GSS API has support for AES anyway, as described in http://docs.oracle.com/javase/7/docs/technotes/guides/security/jgss/jgss-features.html. That way we get better performance (with AES-NI support) as well as better data privacy. I have read through all the comments but didn't quite get why this approach is not considered. Any reasons?

          Yi Liu added a comment -

          Rebase the patch for latest trunk.

          Srikanth Upputuri, the JAAS GSSAPI mechanism does indeed support AES, but it's not suitable here: the client also needs to verify that the DN is legitimate. With DIGEST-MD5, the password is generated from the access token or encryption key; this way, the DN can validate the client and the client can validate the DN (ensuring the block access token is not obtained by a malicious process). With the GSSAPI mechanism we can't ensure this, and it has performance issues. Another reason is that not all users can use a third-party JCE provider; CryptoCodec is scalable and has built-in support for AES-NI in Hadoop.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12669675/HDFS-6606.005.patch
          against trunk revision ee21b13.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8077//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8077//console

          This message is automatically generated.

          Yi Liu added a comment -

          Rebase the patch for latest trunk again.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12670626/HDFS-6606.006.patch
          against trunk revision a9a55db.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.crypto.random.TestOsSecureRandom
          org.apache.hadoop.ha.TestZKFailoverControllerStress
          org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8161//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8161//console

          This message is automatically generated.

          Yi Liu added a comment -

          Test failures are unrelated.
          Aaron T. Myers and Alejandro Abdelnur, do you have further comments? Thanks.

          Aaron T. Myers added a comment -

          The latest patch looks pretty good to me. I have one question and one small suggestion.

          Question: Am I reading this correctly that after this patch if both the client and server support AES that we have no way for clients to continue to use 3des, rc4, or des for data encryption? That may be acceptable if we think that AES is in all cases strictly superior to those other algorithms, but if so we should definitely call this out in the hdfs-default.xml description of "dfs.encrypt.data.transfer.algorithm". I'm thinking something along the lines of "note that if AES is supported by both the client and server then this encryption algorithm will only be used to initially transfer keys for AES."

          Suggestion: Now that DataTransferSaslUtil#performSaslStep1 is only used in one place in the code, might just want to get rid of that function and inline its functionality.

          Thanks a lot, Yi. This is great work.
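
          For illustration, the hdfs-default.xml note Aaron suggests might look roughly like this. This is a sketch of the wording only, not the committed description; the default value shown is an assumption.

```xml
<property>
  <name>dfs.encrypt.data.transfer.algorithm</name>
  <value>3des</value>
  <description>
    The algorithm used when dfs.encrypt.data.transfer is set to true.
    Note that if AES is supported by both the client and the server,
    then this algorithm is only used to initially transfer keys for AES.
  </description>
</property>
```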

          Mike Yoder added a comment -

          That may be acceptable if we think that AES is in all cases strictly superior to those other algorithms

          I can assure you that AES is superior to 3des and rc4!

          Yi Liu added a comment -

          Thanks Aaron T. Myers for the review and Mike Yoder for the comment.
          Agree with Mike that AES (Advanced Encryption Standard) is superior to 3des and rc4 in all cases, http://en.wikipedia.org/wiki/Advanced_Encryption_Standard .

          Will update the description of dfs.encrypt.data.transfer.algorithm and inline the functionality of performSaslStep1 later.

          Yi Liu added a comment -

          Update patch to address all comments.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12670913/HDFS-6606.007.patch
          against trunk revision ef784a2.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.crypto.random.TestOsSecureRandom
          org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract

          The following test timeouts occurred in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestDecommission

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8180//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/8180//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8180//console

          This message is automatically generated.

          Yi Liu added a comment -

          The test failures are not related. The Findbugs warning is also not related; it was introduced by HADOOP-11017, and I filed HADOOP-11129 to fix it.

          Yi Liu added a comment -

          Rebase the patch.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12671386/HDFS-6606.008.patch
          against trunk revision 4ea77ef.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 2 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8216//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/8216//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/8216//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8216//console

          This message is automatically generated.

          Yi Liu added a comment -

          The test failures are unrelated. The two Findbugs warnings are unrelated too, and the real links are:
          https://builds.apache.org/job/PreCommit-HDFS-Build/8216/artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
          https://builds.apache.org/job/PreCommit-HDFS-Build/8216/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Chris Nauroth added a comment -

          Hello Yi, and all of the reviewers. What work remains for this patch, and can I do anything to help? I had been +1 way back on patch version 3, but then there were a few more rounds of feedback. Has all of the feedback been addressed? This is great work, and I'd love to see it go into 2.6.0.

          Yi Liu added a comment -

          Thanks, Chris. All of the feedback has been addressed. ATM gave the latest feedback and offered to take another look after I updated the patch, but he has been busy recently (I talked with him offline).
          So let me ping him again to take another look in the next two days.

          Suresh Srinivas added a comment -

          Aaron T. Myers, can you please comment on this jira? If there is no comment in the next three days, we should proceed with committing this change, assuming Chris Nauroth is +1 on it.

          Aaron T. Myers added a comment -

          Sorry for the delay, folks. The latest patch looks good to me, +1. I don't have time right this second to actually check it in, but can in the next day or two. If someone else (Chris or Suresh or whomever) beats me to it, that'd certainly be fine by me.

          Good work, Yi.

          Chris Nauroth added a comment -

          Thanks for checking in with your final review, Aaron.

          Yi, I reviewed the latest patch once again, and I found one more potential issue in SaslDataTransferServer:

                CipherOption cipherOption = null;
                if (sasl.isNegotiatedQopPrivacy()) {
                  // Negotiate a cipher option
                  cipherOption = negotiateCipherOption(dnConf.getConf(), cipherOptions);
                  if (LOG.isDebugEnabled()) {
                    LOG.debug("Server using cipher suite " + 
                        cipherOption.getCipherSuite().getName());
                  }
                }
          

          It's possible for negotiateCipherOption to return null when the connection comes from an older client version that doesn't do cipher negotiation. If debug logging is enabled, then the log statement would cause a NullPointerException.
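          One way to guard against this is to branch on the null case before touching the cipher suite. The sketch below is illustrative only, not the committed fix; `CipherOption` and `CipherSuite` are stubbed here so it is self-contained (the real classes live in org.apache.hadoop.crypto), and `debugMessage` is a hypothetical helper:

```java
// Null-safe construction of the debug log message. CipherOption and
// CipherSuite are minimal stubs standing in for the Hadoop classes.
public class CipherLogGuard {
  static class CipherSuite {
    private final String name;
    CipherSuite(String name) { this.name = name; }
    String getName() { return name; }
  }

  static class CipherOption {
    private final CipherSuite suite;
    CipherOption(CipherSuite suite) { this.suite = suite; }
    CipherSuite getCipherSuite() { return suite; }
  }

  // Returns a log message even when negotiation produced no cipher option,
  // e.g. a connection from an older client that does not negotiate ciphers.
  static String debugMessage(CipherOption option) {
    if (option != null) {
      return "Server using cipher suite " + option.getCipherSuite().getName();
    }
    return "Server did not negotiate a cipher option";
  }

  public static void main(String[] args) {
    System.out.println(debugMessage(new CipherOption(new CipherSuite("AES/CTR/NoPadding"))));
    System.out.println(debugMessage(null)); // older client: no NullPointerException
  }
}
```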

          I'll be +1 after that's addressed, and I'm happy to volunteer for the commit.

          Yi Liu added a comment -

          Chris, you are right; I updated the patch to address it. Thank you, ATM and tucu, for the review, and thanks for volunteering to commit.
          Thanks to Suresh, Andy, Mike, and Srikanth for the comments.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12677559/HDFS-6606.009.patch
          against trunk revision 0398db1.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.fs.viewfs.TestViewFsHdfs
          org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8566//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8566//console

          This message is automatically generated.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6367 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6367/)
          HDFS-6606. Optimize HDFS Encrypted Transport performance. (yliu) (yliu: rev 58c0bb9ed9f4a2491395b63c68046562a73526c9)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherOption.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
          Yi Liu added a comment -

          Chris, I committed it to avoid a rebase, since I saw another JIRA doing a small refactor of SaslParticipant.
          Thanks again for your review.

          Yi Liu added a comment -

          Committed to trunk, branch-2, and branch-2.6.

          Chris Nauroth added a comment -

          I had forgotten that you can do your own commits now, Yi. Thank you for the patch, and thank you to all code reviewers.

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #727 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/727/)
          HDFS-6606. Optimize HDFS Encrypted Transport performance. (yliu) (yliu: rev 58c0bb9ed9f4a2491395b63c68046562a73526c9)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherOption.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1941 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1941/)
          HDFS-6606. Optimize HDFS Encrypted Transport performance. (yliu) (yliu: rev 58c0bb9ed9f4a2491395b63c68046562a73526c9)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherOption.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk #1916 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1916/)
          HDFS-6606. Optimize HDFS Encrypted Transport performance. (yliu) (yliu: rev 58c0bb9ed9f4a2491395b63c68046562a73526c9)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherOption.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java

            People

            • Assignee: Yi Liu
            • Reporter: Yi Liu
            • Votes: 0
            • Watchers: 19