  Hadoop Common / HADOOP-10768

Optimize Hadoop RPC encryption performance

    Details

    • Type: Improvement
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0-alpha1
    • Fix Version/s: None
    • Component/s: performance, security
    • Labels:
      None
    • Target Version/s:

      Description

      Hadoop RPC encryption is enabled by setting hadoop.rpc.protection to "privacy". It uses the SASL GSSAPI and DIGEST-MD5 mechanisms for secure authentication and data protection. Although GSSAPI supports AES, it has no AES-NI support by default, so the encryption is slow and can become a bottleneck.

      After discussing with Aaron T. Myers, Alejandro Abdelnur and Uma Maheswara Rao G, we can apply the same optimization as in HDFS-6606: use AES-NI, which gives more than a 20x speedup.

      On the other hand, RPC messages are small, but RPC calls are frequent and there may be many calls on one connection, so we need to set up a benchmark to see the real improvement and then make a trade-off.
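      For reference (not part of this issue's patch), a minimal sketch of enabling RPC privacy protection programmatically through the Configuration API; the property name is the one referenced above, and setting it in core-site.xml has the same effect:

        import org.apache.hadoop.conf.Configuration;

        public class RpcPrivacyConfig {
          public static void main(String[] args) {
            Configuration conf = new Configuration();
            // "privacy" enables SASL authentication, integrity and encryption for Hadoop RPC.
            conf.set("hadoop.rpc.protection", "privacy");
            System.out.println("hadoop.rpc.protection = " + conf.get("hadoop.rpc.protection"));
          }
        }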

      Attachments

      1. HADOOP-10768.001.patch
        71 kB
        Dian Fu
      2. HADOOP-10768.002.patch
        76 kB
        Dian Fu
      3. HADOOP-10768.003.patch
        78 kB
        Aaron T. Myers
      4. HADOOP-10768.004.patch
        121 kB
        Dapeng Sun
      5. HADOOP-10768.005.patch
        94 kB
        Dapeng Sun
      6. HADOOP-10768.006.patch
        95 kB
        Dapeng Sun
      7. HADOOP-10768.007.patch
        94 kB
        Dapeng Sun
      8. HADOOP-10768.008.patch
        94 kB
        Dapeng Sun
      9. Optimize Hadoop RPC encryption performance.pdf
        280 kB
        Dian Fu

        Issue Links

          Activity

          dapengsun Dapeng Sun added a comment -

          Thanks Daryn Sharp for your attention.

          The overhead comes from the integrity check (MD5) and the crypto (en/decryption):
          for INTEGRITY mode, the overhead is HmacMD5;
          for PRIVACY mode, the overhead is HmacMD5 + AES/DES.

          From the results, we can see that the patch has improved the throughput of the crypto part a lot; the main bottleneck is no longer the crypto part. If we want to reduce the degradation, we need to keep improving the throughput of the integrity check. I will investigate how to improve it; if you have any ideas or suggestions, please let me know.
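          To illustrate where the integrity-check cost comes from, a minimal JDK-only sketch of computing an HmacMD5 tag over a 1 KB payload (the key here is hypothetical; this is the standard javax.crypto path that DIGEST-MD5 integrity protection relies on, not the patch's code):

            import javax.crypto.Mac;
            import javax.crypto.spec.SecretKeySpec;
            import java.nio.charset.StandardCharsets;

            public class HmacMd5Demo {
              public static void main(String[] args) throws Exception {
                // Hypothetical session key; real SASL derives its own integrity key.
                byte[] key = "session-integrity-key".getBytes(StandardCharsets.UTF_8);
                byte[] payload = new byte[1024];  // roughly one RPC-sized message

                Mac mac = Mac.getInstance("HmacMD5");
                mac.init(new SecretKeySpec(key, "HmacMD5"));
                byte[] tag = mac.doFinal(payload);  // computed on every wrap/unwrap

                System.out.println("HmacMD5 tag length: " + tag.length + " bytes");
              }
            }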

          daryn Daryn Sharp added a comment -

          I'm swamped at the moment and will be in and out of the office till after Thanksgiving, but I will try to review this week since I'd really like this feature!

          Initial question: do you know why there's a ~50% degradation? That's concerning and may severely impede performance (to the point that I can't use it), since the handlers may be starved waiting for the degraded readers to decrypt the messages. If you haven't already, please profile.

          dapengsun Dapeng Sun added a comment - - edited

          Hi Daryn Sharp, Aaron T. Myers

          TestDataNodeVolumeFailure passes in my local environment, and TestUnbuffer is unrelated; it still fails without the patch.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
                trunk Compile Tests
          0 mvndep 1m 31s Maven dependency ordering for branch
          +1 mvninstall 18m 41s trunk passed
          +1 compile 11m 53s trunk passed
          +1 checkstyle 2m 12s trunk passed
          +1 mvnsite 2m 52s trunk passed
          +1 shadedclient 15m 54s branch has no errors when building and testing our client artifacts.
          +1 findbugs 5m 1s trunk passed
          +1 javadoc 2m 18s trunk passed
                Patch Compile Tests
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 2m 15s the patch passed
          +1 compile 12m 1s the patch passed
          +1 cc 12m 1s the patch passed
          +1 javac 12m 1s the patch passed
          -0 checkstyle 2m 15s root: The patch generated 24 new + 872 unchanged - 14 fixed = 896 total (was 886)
          +1 mvnsite 2m 47s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 shadedclient 10m 15s patch has no errors when building and testing our client artifacts.
          +1 findbugs 5m 31s the patch passed
          +1 javadoc 2m 28s the patch passed
                Other Tests
          +1 unit 8m 36s hadoop-common in the patch passed.
          +1 unit 1m 21s hadoop-hdfs-client in the patch passed.
          -1 unit 88m 34s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 38s The patch does not generate ASF License warnings.
          194m 57s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
            hadoop.fs.TestUnbuffer



          Subsystem Report/Notes
          Docker Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639
          JIRA Issue HADOOP-10768
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12897899/HADOOP-10768.008.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml
          uname Linux 09284222731d 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/patchprocess/precommit/personality/provided.sh
          git revision trunk / b1941b2
          maven version: Apache Maven 3.3.9
          Default Java 1.8.0_151
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/13694/artifact/out/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13694/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13694/testReport/
          Max. process+thread count 3634 (vs. ulimit of 5000)
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13694/console
          Powered by Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 12s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
                trunk Compile Tests
          0 mvndep 1m 32s Maven dependency ordering for branch
          +1 mvninstall 18m 41s trunk passed
          +1 compile 14m 58s trunk passed
          +1 checkstyle 2m 23s trunk passed
          +1 mvnsite 3m 29s trunk passed
          +1 shadedclient 16m 56s branch has no errors when building and testing our client artifacts.
          +1 findbugs 5m 23s trunk passed
          +1 javadoc 2m 19s trunk passed
                Patch Compile Tests
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 2m 15s the patch passed
          +1 compile 11m 1s the patch passed
          +1 cc 11m 1s the patch passed
          +1 javac 11m 1s the patch passed
          -0 checkstyle 2m 14s root: The patch generated 18 new + 879 unchanged - 8 fixed = 897 total (was 887)
          +1 mvnsite 2m 50s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 shadedclient 9m 50s patch has no errors when building and testing our client artifacts.
          +1 findbugs 5m 36s the patch passed
          +1 javadoc 2m 19s the patch passed
                Other Tests
          +1 unit 7m 47s hadoop-common in the patch passed.
          +1 unit 1m 24s hadoop-hdfs-client in the patch passed.
          -1 unit 83m 1s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 33s The patch does not generate ASF License warnings.
          191m 39s



          Reason Tests
          Failed junit tests hadoop.tracing.TestTraceAdmin
            hadoop.hdfs.TestTrashWithSecureEncryptionZones
            hadoop.hdfs.TestReconstructStripedFile
            hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
            hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
            hadoop.hdfs.server.balancer.TestBalancer
            hadoop.hdfs.TestSecureEncryptionZoneWithKMS
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
            hadoop.hdfs.qjournal.TestSecureNNWithQJM
            hadoop.hdfs.server.namenode.TestSecureNameNode
            hadoop.fs.TestUnbuffer
            hadoop.hdfs.server.mover.TestMover



          Subsystem Report/Notes
          Docker Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639
          JIRA Issue HADOOP-10768
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12897691/HADOOP-10768.007.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml
          uname Linux 73c56a195f3a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/patchprocess/precommit/personality/provided.sh
          git revision trunk / c4c57b8
          maven version: Apache Maven 3.3.9
          Default Java 1.8.0_151
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/13686/artifact/out/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13686/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13686/testReport/
          Max. process+thread count 3673 (vs. ulimit of 5000)
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13686/console
          Powered by Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          dapengsun Dapeng Sun added a comment -

          Here are my test commands:
          No SASL:
          java -cp `./bin/hadoop classpath` org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024
          SASL INTEGRITY:
          java -cp `./bin/hadoop classpath` org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a -q INTEGRITY
          SASL PRIVACY with AES:
          java -cp `./bin/hadoop classpath` org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a -q PRIVACY -f AES/CTR/NoPadding
          SASL PRIVACY with original DES:
          java -cp `./bin/hadoop classpath` org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a -q PRIVACY

          dapengsun Dapeng Sun added a comment - - edited

          Hi Daryn Sharp, Aaron T. Myers, I have updated the patch; please help review it if your time allows.

          Add some unit tests and results of measuring the performance improvement

          I have finished the initial benchmark with RPCCallBenchmark and also added two UTs in the latest patch. The benchmark results show that the patch improves the throughput of RPC calls by ~4x, and the performance of PRIVACY is now close to INTEGRITY's. Here is the detailed data (total calls per second):

          SASL mode                               r:4 c:1 s:1 m:1024   r:4 c:30 s:30 m:1024
          NONE                                                 17102                 233726
          INTEGRITY                                            11693                 114303
          PRIVACY (AES with HadoopOpensslCodec)                 9194                 112763
          PRIVACY (AES with HadoopJceCodec)                     7718                 111807
          PRIVACY (Original DES)                                1872                  34007

          Why not use javax cipher libraries? Any number of ciphers could be used now and in the future w/o code change. The aes ciphers are supposed to use aes-ni intrinsics when available.

          The JDK cipher doesn't take good advantage of Intel AES-NI until JDK 9 (JDK-8143925 improves JDK 9 AES-CTR with a 4~6x gain), so I think using the Crypto Stream in Hadoop is a good choice.
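          For comparison, a minimal sketch of the plain JDK cipher path discussed above, using only javax.crypto with AES/CTR/NoPadding (hypothetical key and IV; this is not the Hadoop crypto stream code in the patch):

            import javax.crypto.Cipher;
            import javax.crypto.spec.IvParameterSpec;
            import javax.crypto.spec.SecretKeySpec;
            import java.util.Arrays;

            public class JdkAesCtrDemo {
              public static void main(String[] args) throws Exception {
                byte[] key = new byte[16];       // hypothetical 128-bit session key
                byte[] iv = new byte[16];        // hypothetical counter/IV
                byte[] payload = new byte[1024]; // roughly one RPC-sized message

                Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
                enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
                byte[] wire = enc.doFinal(payload);

                Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
                dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
                byte[] plain = dec.doFinal(wire);

                System.out.println("round trip ok: " + Arrays.equals(payload, plain));
              }
            }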

          Should use a custom sasl client/server that delegates to the actual sasl instance. The ipc layer changes would be minimal and easier to maintain.

          It is hard to separate the current logic into an independent SASL client/server with minimal changes; I have moved the logic into separate methods to minimize the changes.

          The cipher options appears to be present in every packet. If so, it should only be in the negotiate/initiate messages.

          The cipher option only appears in SASL negotiate/initiate.

          Thanks.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 9s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 16m 2s trunk passed
          +1 compile 11m 46s trunk passed
          +1 checkstyle 2m 13s trunk passed
          +1 mvnsite 2m 50s trunk passed
          +1 shadedclient 15m 21s branch has no errors when building and testing our client artifacts.
          +1 findbugs 4m 53s trunk passed
          +1 javadoc 2m 17s trunk passed
                Patch Compile Tests
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 2m 14s the patch passed
          +1 compile 11m 35s the patch passed
          +1 cc 11m 35s the patch passed
          +1 javac 11m 35s the patch passed
          -0 checkstyle 2m 20s root: The patch generated 18 new + 877 unchanged - 9 fixed = 895 total (was 886)
          +1 mvnsite 3m 0s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 shadedclient 9m 50s patch has no errors when building and testing our client artifacts.
          +1 findbugs 5m 25s the patch passed
          +1 javadoc 2m 24s the patch passed
                Other Tests
          +1 unit 8m 32s hadoop-common in the patch passed.
          +1 unit 1m 20s hadoop-hdfs-client in the patch passed.
          -1 unit 82m 1s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 33s The patch does not generate ASF License warnings.
          182m 42s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
            hadoop.hdfs.server.federation.metrics.TestFederationMetrics
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.fs.TestUnbuffer



          Subsystem Report/Notes
          Docker Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639
          JIRA Issue HADOOP-10768
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12897294/HADOOP-10768.006.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml
          uname Linux b8010396bd22 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/patchprocess/precommit/personality/provided.sh
          git revision trunk / 3e26077
          maven version: Apache Maven 3.3.9
          Default Java 1.8.0_151
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/13666/artifact/out/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13666/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13666/testReport/
          Max. process+thread count 3861 (vs. ulimit of 5000)
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13666/console
          Powered by Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 1m 44s Maven dependency ordering for branch
          +1 mvninstall 17m 37s trunk passed
          +1 compile 13m 23s trunk passed
          +1 checkstyle 2m 13s trunk passed
          +1 mvnsite 2m 56s trunk passed
          +1 shadedclient 15m 38s branch has no errors when building and testing our client artifacts.
          +1 findbugs 5m 31s trunk passed
          +1 javadoc 2m 21s trunk passed
                Patch Compile Tests
          0 mvndep 0m 18s Maven dependency ordering for patch
          +1 mvninstall 2m 35s the patch passed
          +1 compile 11m 39s the patch passed
          +1 cc 11m 39s the patch passed
          +1 javac 11m 39s the patch passed
          -0 checkstyle 2m 15s root: The patch generated 18 new + 870 unchanged - 9 fixed = 888 total (was 879)
          +1 mvnsite 3m 10s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 shadedclient 9m 54s patch has no errors when building and testing our client artifacts.
          +1 findbugs 5m 19s the patch passed
          +1 javadoc 2m 16s the patch passed
                Other Tests
          -1 unit 7m 52s hadoop-common in the patch failed.
          +1 unit 1m 21s hadoop-hdfs-client in the patch passed.
          -1 unit 86m 18s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 33s The patch does not generate ASF License warnings.
          192m 22s



          Reason Tests
          Unreaped Processes hadoop-hdfs:1
          Failed junit tests hadoop.ipc.TestRPCCallBenchmark
            hadoop.fs.TestUnbuffer
          Timed out junit tests org.apache.hadoop.hdfs.TestLeaseRecovery2



          Subsystem Report/Notes
          Docker Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639
          JIRA Issue HADOOP-10768
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12897275/HADOOP-10768.005.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml
          uname Linux 0d1cfe3dd04b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/patchprocess/precommit/personality/provided.sh
          git revision trunk / 3e26077
          maven version: Apache Maven 3.3.9
          Default Java 1.8.0_151
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/13663/artifact/out/diff-checkstyle-root.txt
          Unreaped Processes Log https://builds.apache.org/job/PreCommit-HADOOP-Build/13663/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-reaper.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13663/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13663/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13663/testReport/
          Max. process+thread count 3929 (vs. ulimit of 5000)
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13663/console
          Powered by Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
                trunk Compile Tests
          0 mvndep 1m 46s Maven dependency ordering for branch
          +1 mvninstall 16m 52s trunk passed
          +1 compile 12m 36s trunk passed
          +1 checkstyle 2m 11s trunk passed
          +1 mvnsite 3m 1s trunk passed
          +1 shadedclient 15m 28s branch has no errors when building and testing our client artifacts.
          +1 findbugs 5m 19s trunk passed
          +1 javadoc 2m 28s trunk passed
                Patch Compile Tests
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 2m 25s the patch passed
          +1 compile 11m 48s the patch passed
          +1 cc 11m 48s the patch passed
          -1 javac 11m 48s root generated 1 new + 1234 unchanged - 0 fixed = 1235 total (was 1234)
          -0 checkstyle 2m 15s root: The patch generated 82 new + 873 unchanged - 7 fixed = 955 total (was 880)
          +1 mvnsite 2m 58s the patch passed
          -1 whitespace 0m 0s The patch has 20 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 shadedclient 9m 48s patch has no errors when building and testing our client artifacts.
          +1 findbugs 5m 37s the patch passed
          +1 javadoc 2m 17s the patch passed
                Other Tests
          +1 unit 8m 14s hadoop-common in the patch passed.
          +1 unit 1m 25s hadoop-hdfs-client in the patch passed.
          -1 unit 95m 19s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 40s The patch does not generate ASF License warnings.
          200m 7s



          Reason Tests
          Unreaped Processes hadoop-hdfs:5
          Failed junit tests hadoop.hdfs.TestFileCreationDelete
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
            hadoop.hdfs.TestErasureCodeBenchmarkThroughput
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010
            hadoop.hdfs.TestMaintenanceState
            hadoop.hdfs.TestAbandonBlock
            hadoop.hdfs.TestFileCorruption
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
            hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData
            hadoop.hdfs.server.balancer.TestBalancerRPCDelay
            hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
            hadoop.fs.TestUnbuffer
            hadoop.hdfs.TestSetTimes



          Subsystem Report/Notes
          Docker Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639
          JIRA Issue HADOOP-10768
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12897029/HADOOP-10768.004.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml
          uname Linux ecb399d1ddb5 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/patchprocess/precommit/personality/provided.sh
          git revision trunk / 8a1bd9a
          maven version: Apache Maven 3.3.9
          Default Java 1.8.0_151
          findbugs v3.1.0-RC1
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/13659/artifact/out/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/13659/artifact/out/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/13659/artifact/out/whitespace-eol.txt
          Unreaped Processes Log https://builds.apache.org/job/PreCommit-HADOOP-Build/13659/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-reaper.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13659/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13659/testReport/
          Max. process+thread count 2813 (vs. ulimit of 5000)
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13659/console
          Powered by Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          dapengsun Dapeng Sun added a comment -

          Having discussed with Dian Fu, I would like to pick up this JIRA. I will upload a new patch when it is finished.

          dian.fu Dian Fu added a comment -

          Hi Daryn Sharp, Aaron T. Myers, very sorry for the late response. I'm afraid that I currently have no bandwidth to continue the work on this ticket, and it would be great if someone could take it over.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
                trunk Compile Tests
          0 mvndep 1m 23s Maven dependency ordering for branch
          +1 mvninstall 13m 23s trunk passed
          +1 compile 12m 15s trunk passed
          +1 checkstyle 2m 3s trunk passed
          +1 mvnsite 3m 0s trunk passed
          +1 shadedclient 13m 46s branch has no errors when building and testing our client artifacts.
          +1 findbugs 6m 2s trunk passed
          +1 javadoc 2m 40s trunk passed
                Patch Compile Tests
          0 mvndep 0m 18s Maven dependency ordering for patch
          +1 mvninstall 2m 41s the patch passed
          +1 compile 13m 26s the patch passed
          +1 cc 13m 26s the patch passed
          +1 javac 13m 26s the patch passed
          -0 checkstyle 2m 19s root: The patch generated 14 new + 640 unchanged - 5 fixed = 654 total (was 645)
          +1 mvnsite 3m 13s the patch passed
          -1 whitespace 0m 0s The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 shadedclient 8m 28s patch has no errors when building and testing our client artifacts.
          +1 findbugs 6m 0s the patch passed
          +1 javadoc 2m 13s the patch passed
                Other Tests
          -1 unit 8m 58s hadoop-common in the patch failed.
          +1 unit 1m 21s hadoop-hdfs-client in the patch passed.
          -1 unit 110m 33s hadoop-hdfs in the patch failed.
          -1 asflicense 0m 30s The patch generated 3 ASF License warnings.
          212m 7s



          Reason Tests
          Failed junit tests hadoop.security.TestRaceWhenRelogin
            hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
            hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
            hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean



          Subsystem Report/Notes
          Docker Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639
          JIRA Issue HADOOP-10768
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12894494/HADOOP-10768.003.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml
          uname Linux 3c8b5fc0b926 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/patchprocess/precommit/personality/provided.sh
          git revision trunk / 8be5707
          maven version: Apache Maven 3.3.9
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/13591/artifact/out/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/13591/artifact/out/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13591/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13591/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13591/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HADOOP-Build/13591/artifact/out/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13591/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          atm Aaron T. Myers added a comment -

          I was taking a look at this JIRA and noticed that the latest patch no longer applies to trunk. Attaching an updated patch which should apply cleanly, though I've made no other changes besides what was strictly required.

          Dian Fu - are you still interested in working on this JIRA? If so, I think the next step is to address Daryn Sharp's feedback, all of which I think make sense. Additionally, I think the patch should add some tests beyond just TestPBHelper, and I'd also love to see some results of measuring the performance improvement provided by this patch.

          daryn Daryn Sharp added a comment -

          OK. The patch does appear to encrypt at the packet level, which is good. Preliminary comments:

          1. The cipher options appears to be present in every packet. If so, it should only be in the negotiate/initiate messages.
          2. Should use a custom sasl client/server that delegates to the actual sasl instance. The ipc layer changes would be minimal and easier to maintain.
          3. Why not use javax cipher libraries? Any number of ciphers could be used now and in the future w/o code change. The aes ciphers are supposed to use aes-ni intrinsics when available.
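
          Not from the patch, just an illustrative sketch of points 2 and 3 above: a SaslClient wrapper that delegates authentication to the real mechanism and replaces wrap/unwrap with a plain JCE cipher. The JDK's AES implementations use AES-NI intrinsics where the CPU supports them. The class name and key handling below are assumptions; AES/GCM is used only to keep the example self-contained (the design doc discusses AES/CTR), and the IV is simply prepended to each wrapped message.

{code:java}
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslException;

/**
 * Illustrative only: delegates the SASL handshake to the real mechanism,
 * but does wrap/unwrap with a JCE cipher. Key negotiation, replay
 * protection and error handling are omitted.
 */
public class AesWrappingSaslClient implements SaslClient {
  private final SaslClient delegate;        // the real GSSAPI/DIGEST-MD5 client
  private final SecretKeySpec key;          // negotiated out-of-band (hypothetical)
  private final SecureRandom random = new SecureRandom();

  public AesWrappingSaslClient(SaslClient delegate, byte[] keyBytes) {
    this.delegate = delegate;
    this.key = new SecretKeySpec(keyBytes, "AES");
  }

  @Override public byte[] wrap(byte[] outgoing, int off, int len) throws SaslException {
    try {
      byte[] iv = new byte[12];
      random.nextBytes(iv);                               // fresh IV per message
      Cipher c = Cipher.getInstance("AES/GCM/NoPadding"); // AES-NI when available
      c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
      byte[] ct = c.doFinal(outgoing, off, len);
      byte[] framed = new byte[iv.length + ct.length];    // message = IV || ciphertext
      System.arraycopy(iv, 0, framed, 0, iv.length);
      System.arraycopy(ct, 0, framed, iv.length, ct.length);
      return framed;
    } catch (GeneralSecurityException e) {
      throw new SaslException("wrap failed", e);
    }
  }

  @Override public byte[] unwrap(byte[] incoming, int off, int len) throws SaslException {
    try {
      Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
      c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, incoming, off, 12));
      return c.doFinal(incoming, off + 12, len - 12);     // strip the 12-byte IV
    } catch (GeneralSecurityException e) {
      throw new SaslException("unwrap failed", e);
    }
  }

  // Everything else is pure delegation to the real SASL mechanism.
  @Override public String getMechanismName() { return delegate.getMechanismName(); }
  @Override public boolean hasInitialResponse() { return delegate.hasInitialResponse(); }
  @Override public byte[] evaluateChallenge(byte[] challenge) throws SaslException {
    return delegate.evaluateChallenge(challenge);
  }
  @Override public boolean isComplete() { return delegate.isComplete(); }
  @Override public Object getNegotiatedProperty(String propName) {
    return delegate.getNegotiatedProperty(propName);
  }
  @Override public void dispose() throws SaslException { delegate.dispose(); }
}
{code}
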
          daryn Daryn Sharp added a comment -

          I specifically ensured the rpcv9 protocol (very early 2.x releases) is designed to support an RPC proxy to reduce connections, for instance to the NN. Ex: every RPC packet is framed, so a proxy can mux/demux the packets to clients even if encryption is used. I know the SASL wrap/unwrap path is expensive but haven't had the cycles to improve it.

          Adding encryption to the entire stream will negate the proxy capability, which is something I think will soon be needed with very large clusters. -1 if that's what this patch does. I'll review shortly.
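
          For illustration only: the per-packet framing described above is what lets a proxy forward encrypted traffic it cannot read. A toy sketch of that property, assuming a simple 4-byte length prefix (the real rpcv9 wire format carries more fields than this):

{code:java}
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

/**
 * Because every RPC packet carries its own length prefix, a proxy can copy
 * whole packets between connections without needing to decrypt the payload.
 */
public class FramedPacketRelay {
  // Copy exactly one length-prefixed packet from 'in' to 'out'.
  public static void relayOnePacket(DataInputStream in, DataOutputStream out)
      throws IOException {
    int length = in.readInt();            // 4-byte frame length
    byte[] payload = new byte[length];    // opaque, possibly encrypted bytes
    in.readFully(payload);
    out.writeInt(length);                 // re-frame for the downstream connection
    out.write(payload);
    out.flush();
  }
}
{code}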

          drankye Kai Zheng added a comment -

          This actually bypasses the inefficient SASL wrap/unwrap operations by providing an extra Hadoop layer above them, which should be quite flexible for Hadoop. A further consideration is how to make that layer clean and also available to the rest of the ecosystem, since other projects such as HBase don't use Hadoop IPC.

          Any thoughts?

          drankye Kai Zheng added a comment -

          Thanks for the design doc and clarifying. It looks like good work, Dian Fu!

          Comments about the doc:

          • It would be good to clearly say: this builds application-layer data encryption ABOVE SASL (not mixed with SASL or in the same layer as SASL). Accordingly, you can simplify your flow picture a lot by reducing it to only two steps: 1) SASL handshake; 2) Hadoop data encryption cipher negotiation. The illustrated 7 steps for SASL may be specific to GSSAPI; for other mechanisms it may be much simpler, and in any case we don't need to show it here.
          • Why do we need SaslCryptoCodec? What does it do? Maybe after the separate encryption negotiation is complete, we can create CryptoOutputStream directly?
          • Since we're taking the same approach as data transfer encryption, both doing separate encryption cipher negotiation and data encryption after and above SASL, one for file data and the other for RPC data, maybe we can mostly reuse the existing work? Did we go this way in the implementation, or is there any difference?
          • How are the encryption key(s) negotiated or determined? Does it consider the established session key from SASL if available? It seems to produce a key pair; how are the two keys used?
          • Do we hard-code the AES cipher to AES/CTR mode? I guess other modes like AES/GCM could also be used.

          dian.fu Dian Fu added a comment -

          Hi Kai Zheng,
          Thanks a lot for your comments. Have attached the design doc.

          I guess it's all about and for performance. Do you have any number to share?

          Correct. It's all about performance. I will post the performance data later today.

          What's the impact? Does it mean upgrading the RPC version? Can external clients still talk with the server via SASL? How does this affect downstream components?

          The patch is backward compatible, so downstream components won't be affected and there is no need to bump the RPC version either.

          Looks like the work is mainly in the SASL layer; when Kerberos is enabled, will it still favor the GSSAPI mechanism? If not, or if it's bypassed, what encryption key is used and how is it obtained?

          It still relies on the GSSAPI or DIGEST-MD5 mechanism for authentication. At the end of the original SASL handshake, a pair of encryption keys is generated randomly by the RPC server and sent to the RPC client over the secure SASL channel.
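
          A rough sketch of what generating such a key pair on the server could look like; the class and field names below are illustrative assumptions, not the patch's actual message layout:

{code:java}
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import javax.crypto.KeyGenerator;

/**
 * Generate two random AES keys (one per direction) at the end of the SASL
 * handshake. How they are serialized into the negotiation message is not
 * shown here.
 */
public class RpcCipherKeyPair {
  final byte[] inKey;
  final byte[] outKey;
  final byte[] inIv = new byte[16];
  final byte[] outIv = new byte[16];

  RpcCipherKeyPair(int keyBits) throws NoSuchAlgorithmException {
    KeyGenerator kg = KeyGenerator.getInstance("AES");
    kg.init(keyBits);                        // e.g. 128 or 256
    inKey = kg.generateKey().getEncoded();   // server-in / client-out key
    outKey = kg.generateKey().getEncoded();  // server-out / client-in key
    SecureRandom rng = new SecureRandom();
    rng.nextBytes(inIv);                     // per-direction IVs for a CTR cipher
    rng.nextBytes(outIv);
  }
}
{code}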

          Would you break it down? This one can be the umbrella.

          Good advice. I will split the patch to ease the review.

          drankye Kai Zheng added a comment -

          Thanks Dian Fu for working on and attacking this!
          I only took a quick look at the work. Some high-level questions so far:

          • Would you have a design doc that describes the requirement and the approach? I understand this was well discussed in the past, but I guess a doc like this would be good to summarize it and bring fresh discussion.
          • I guess it's all about and for performance. Do you have any number to share?
          • What's the impact? Does it mean upgrading the RPC version? Can external clients still talk with the server via SASL? How does this affect downstream components?
          • Looks like the work is mainly in the SASL layer; when Kerberos is enabled, will it still favor the GSSAPI mechanism? If not, or if it's bypassed, what encryption key is used and how is it obtained?
          • The patch looks rather large, with the change covering crypto, protocol, SASL RPC client and server, data transfer and some misc. Would you break it down? This one can be the umbrella.

          Thanks again!

          dian.fu Dian Fu added a comment -

          Updated the patch to fix the checkstyle issues. The test failures aren't related to this patch. The failure of TestShortCircuitLocalRead is caused by HADOOP-12994. The other failing tests pass in my local environment.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 6m 37s trunk passed
          +1 compile 5m 50s trunk passed with JDK v1.8.0_77
          +1 compile 6m 42s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 13s trunk passed
          +1 mvnsite 2m 22s trunk passed
          +1 mvneclipse 0m 40s trunk passed
          +1 findbugs 5m 12s trunk passed
          +1 javadoc 2m 18s trunk passed with JDK v1.8.0_77
          +1 javadoc 3m 15s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 14s Maven dependency ordering for patch
          +1 mvninstall 1m 58s the patch passed
          +1 compile 5m 42s the patch passed with JDK v1.8.0_77
          +1 cc 5m 42s the patch passed
          +1 javac 5m 42s the patch passed
          +1 compile 6m 36s the patch passed with JDK v1.7.0_95
          +1 cc 6m 36s the patch passed
          +1 javac 6m 36s the patch passed
          -1 checkstyle 1m 15s root: patch generated 52 new + 662 unchanged - 5 fixed = 714 total (was 667)
          +1 mvnsite 2m 21s the patch passed
          +1 mvneclipse 0m 41s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 findbugs 5m 47s the patch passed
          +1 javadoc 2m 16s the patch passed with JDK v1.8.0_77
          +1 javadoc 3m 12s the patch passed with JDK v1.7.0_95
          -1 unit 6m 38s hadoop-common in the patch failed with JDK v1.8.0_77.
          +1 unit 0m 50s hadoop-hdfs-client in the patch passed with JDK v1.8.0_77.
          -1 unit 67m 25s hadoop-hdfs in the patch failed with JDK v1.8.0_77.
          +1 unit 8m 2s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 1m 2s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 64m 18s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 27s Patch does not generate ASF License warnings.
          214m 59s



          Reason Tests
          JDK v1.8.0_77 Failed junit tests hadoop.metrics2.impl.TestGangliaMetrics
            hadoop.hdfs.TestHFlush
            hadoop.hdfs.server.datanode.TestDataNodeMetrics
            hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
            hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
            hadoop.hdfs.server.datanode.TestFsDatasetCache
            hadoop.hdfs.TestReadStripedFileWithMissingBlocks
            hadoop.hdfs.server.balancer.TestBalancer
          JDK v1.8.0_77 Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile
            org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.TestHFlush
            hadoop.hdfs.server.blockmanagement.TestNodeCount
            hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
            hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
            hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
            hadoop.hdfs.server.datanode.TestFsDatasetCache
            hadoop.hdfs.TestReadStripedFileWithMissingBlocks
            hadoop.hdfs.TestEncryptionZones
          JDK v1.7.0_95 Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile
            org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12798454/HADOOP-10768.001.patch
          JIRA Issue HADOOP-10768
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc xml
          uname Linux cf1e85acd556 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 35f0770
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9079/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9079/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9079/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9079/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/9079/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HADOOP-Build/9079/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HADOOP-Build/9079/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9079/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9079/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          dian.fu Dian Fu added a comment -

          As discussed with Yi Liu offline, I'd like to pick up this JIRA. Attaching an initial patch for review.

          hitliuyi Yi Liu added a comment -

          I'm working on this. It seems that for services with heavy RPC traffic, such as the NameNode, performance degrades noticeably when encryption is enabled.
          I will show performance benefits after the patch is ready.

          hitliuyi Yi Liu added a comment -

          Andrew Purtell, thanks for your nice comments; you raised good ideas.

          Java's GSSAPI uses JCE ciphers for crypto support. Would it be possible to simply swap in an accelerated provider like Diceros?

          Right. We also want to improve RPCs that don't use Kerberos for authentication and data protection, for example those using delegation tokens and so on. As for simply swapping in an accelerated provider, I'm still considering the details; I intend to resolve them together.

          On the other hand, whether to wrap payloads using the SASL client or server or not is an application decision. One could wrap the initial payloads with whatever encryption was negotiated during connection initiation until completing additional key exchange and negotiation steps, then switch to an alternate means of applying a symmetric cipher to RPC payloads.

          Right, I agree, it's a good idea.

          This is a similar issue we had/have with HBase write ahead log encryption, because we need to encrypt on a per-entry boundary for avoiding data loss during recovery, and each entry is small. You might think that small payloads mean we won't be able to increase throughput with accelerated crypto, and you would be right, but the accelerated crypto still reduces on CPU time substantially, with proportional reduction in latency introduced by cryptographic operations. I think for both the HBase WAL and Hadoop RPC, latency is a critical consideration.

          You have a point. I will also set up a benchmark for this.
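
          Not the benchmark referred to here, just a rough sketch of how one might gauge the per-call AES/CTR cost for small, RPC-sized payloads with plain JCE; the payload size and iteration count are arbitrary assumptions:

{code:java}
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

// Measures average time to encrypt many small messages with one AES/CTR cipher.
public class SmallMessageCipherBench {
  public static void main(String[] args) throws Exception {
    KeyGenerator kg = KeyGenerator.getInstance("AES");
    kg.init(128);
    SecretKey key = kg.generateKey();
    byte[] iv = new byte[16];
    byte[] msg = new byte[256];              // typical small RPC payload
    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));

    int iterations = 1_000_000;
    long start = System.nanoTime();
    for (int i = 0; i < iterations; i++) {
      cipher.update(msg);                    // stream-cipher update per message
    }
    long elapsed = System.nanoTime() - start;
    System.out.printf("avg %.1f ns per 256-byte message%n",
        (double) elapsed / iterations);
  }
}
{code}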

          apurtell Andrew Purtell added a comment -

          Even GSSAPI supports using AES, but without AES-NI support by default, so the encryption is slow and will become bottleneck.

          Java's GSSAPI uses JCE ciphers for crypto support. Would it be possible to simply swap in an accelerated provider like Diceros?
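
          For reference, swapping providers is purely a JCE configuration step. A small sketch of the idea; the accelerated provider class below is a placeholder, not Diceros' actual class name:

{code:java}
import java.security.Provider;
import java.security.Security;
import javax.crypto.Cipher;

// Register an accelerated JCE provider ahead of the defaults so that
// Cipher.getInstance() calls made inside GSSAPI resolve to it.
public class AcceleratedProviderSetup {
  public static void install(Provider acceleratedProvider) {
    // Position 1 means it is consulted before the bundled SunJCE provider.
    Security.insertProviderAt(acceleratedProvider, 1);
  }

  public static void main(String[] args) throws Exception {
    // install(new com.example.FastAesProvider());   // placeholder provider
    Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
    System.out.println("AES/CTR served by provider: " + c.getProvider().getName());
  }
}
{code}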

          On the other hand, whether to wrap payloads using the SASL client or server or not is an application decision. One could wrap the initial payloads with whatever encryption was negotiated during connection initiation until completing additional key exchange and negotiation steps, then switch to an alternate means of applying a symmetric cipher to RPC payloads.

          On the other hand, RPC message is small

          This is a similar issue we had/have with HBase write ahead log encryption, because we need to encrypt on a per-entry boundary for avoiding data loss during recovery, and each entry is small. You might think that small payloads mean we won't be able to increase throughput with accelerated crypto, and you would be right, but the accelerated crypto still reduces on CPU time substantially, with proportional reduction in latency introduced by cryptographic operations. I think for both the HBase WAL and Hadoop RPC, latency is a critical consideration.


            People

            • Assignee: dapengsun Dapeng Sun
            • Reporter: hitliuyi Yi Liu
            • Votes: 0
            • Watchers: 41
