Hadoop HDFS / HDFS-10312

Large block reports may fail to decode at NameNode due to 64 MB protobuf maximum length restriction.

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: namenode
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      Our RPC server caps the maximum size of incoming messages at 64 MB by default. For exceptional circumstances, this limit can be raised using ipc.maximum.data.length. However, for block reports, there is still an internal maximum length restriction of 64 MB enforced by protobuf. (A sample stack trace follows in the comments.) This issue proposes to apply the same override to our block list decoding, so that large block reports can proceed.

      Attachments

      1. HDFS-10312.001.patch
        21 kB
        Chris Nauroth
      2. HDFS-10312.002.patch
        21 kB
        Chris Nauroth
      3. HDFS-10312.003.patch
        23 kB
        Chris Nauroth
      4. HDFS-10312.004.patch
        23 kB
        Chris Nauroth


          Activity

          cnauroth Chris Nauroth added a comment -

          I saw this happen with a block report from a DataNode containing ~6 million blocks. All blocks were on a single data directory, so unfortunately, the block report splitting by storage didn't help. Here is a sample stack trace:

          org.apache.hadoop.ipc.RemoteException: java.lang.IllegalStateException: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
          	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:4404)
          	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1436)
          	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:173)
          	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:30059)
          	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637)
          	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
          	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
          	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
          	at java.security.AccessController.doPrivileged(Native Method)
          	at javax.security.auth.Subject.doAs(Subject.java:415)
          	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1742)
          	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2417)
          Caused by: java.lang.IllegalStateException: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
          	at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:369)
          	at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:347)
          	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2478)
          	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2313)
          	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2121)
          	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$1.call(NameNodeRpcServer.java:1439)
          	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$1.call(NameNodeRpcServer.java:1436)
          	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
          	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4463)
          	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4442)
          Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
          	at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
          	at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
          	at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
          	at com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
          	at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:365)
          	... 9 more
          
          	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1443)
          	at org.apache.hadoop.ipc.Client.call(Client.java:1402)
          	at org.apache.hadoop.ipc.Client.call(Client.java:1352)
          	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
          	at com.sun.proxy.$Proxy21.blockReport(Unknown Source)
          	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:204)
          	at org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:86)
          

          This is an unusual situation, but we should provide a way for it to succeed.

          cnauroth Chris Nauroth added a comment -

          The attached patch passes the value of ipc.maximum.data.length through to the block list decoding layer and applies it as an override to the protobuf classes. I considered introducing a new configuration property, but ultimately decided against it, because the admin would then have to tune two settings in sync whenever this problem appeared. I kept a few of the old method signatures that don't include the max length and annotated them @VisibleForTesting to avoid a larger impact on existing tests. The new test suite demonstrates the problem and the fix.
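          The mechanics of the override can be sketched without the real protobuf types. Every name below is illustrative, not the patch's actual API; the real fix threads the configured limit into CodedInputStream.setSizeLimit() (the method named in the exception message) inside the block list decoder.

```java
// Hedged sketch: reuse the RPC server's single configured limit
// (ipc.maximum.data.length) in the decoder, rather than adding a second knob.
// All class and method names here are illustrative stand-ins.
public class DecoderLimitPlumbing {
  static final int DEFAULT_MAX_DATA_LENGTH = 64 * 1024 * 1024; // 64 MB default

  // Stand-in for a decode entry point that now accepts the max length.
  static int decodeBuffer(byte[] buffer, int maxDataLength) {
    if (buffer.length > maxDataLength) {
      // Real code would instead raise the limit on the protobuf input stream
      // via CodedInputStream.setSizeLimit(maxDataLength) before decoding.
      throw new IllegalStateException("message exceeds configured limit");
    }
    return buffer.length; // real code would decode varint-packed replicas here
  }

  public static void main(String[] args) {
    byte[] small = new byte[16];
    System.out.println(decodeBuffer(small, DEFAULT_MAX_DATA_LENGTH));
  }
}
```

          The design point is that an admin who hits this problem has already raised ipc.maximum.data.length; the decoder simply honors the same value.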

          xyao Xiaoyu Yao added a comment -

          Thanks Chris Nauroth for posting the fix along with your analysis. The patch looks good to me. +1 pending Jenkins.

          xyao Xiaoyu Yao added a comment -

          As a follow-up, I suggest we document the ipc.maximum.data.length key. Currently, I can't find any information about it in core-default.xml.
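          Such an entry could look like the following core-default.xml fragment. The value shown is the 64 MB default; the description wording is a suggestion for illustration, not text taken from any committed change.

```xml
<property>
  <name>ipc.maximum.data.length</name>
  <value>67108864</value>
  <description>
    Maximum allowed size in bytes of an incoming RPC message, 64 MB by
    default. Raise this with care, for example for DataNodes that report
    a very large number of blocks from a single storage directory.
  </description>
</property>
```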

          liuml07 Mingliang Liu added a comment -

          +1 (non-binding)

          One nit: in the unit test testBlockReportExceedsLengthLimit(), we could add fail("Should have failed because of the too long RPC data length"); as the last statement of the try block.
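          The suggested pattern, sketched standalone below. The local fail() helper stands in for org.junit.Assert.fail, and callThatMustThrow() stands in for the oversized blockReport RPC; both are illustrative.

```java
// Sketch of the reviewer's suggestion: when the call under test is expected
// to throw, reaching the end of the try block should itself fail the test.
public class FailPatternSketch {
  // Stand-in for org.junit.Assert.fail.
  static void fail(String message) { throw new AssertionError(message); }

  // Stand-in for the oversized blockReport call, which must be rejected.
  static void callThatMustThrow() {
    throw new IllegalStateException("RPC data too long");
  }

  public static void main(String[] args) {
    try {
      callThatMustThrow();
      // Without this line, a silently succeeding call would pass the test.
      fail("Should have failed because of the too long RPC data length");
    } catch (IllegalStateException expected) {
      // Expected path: the oversized report was rejected.
    }
    System.out.println("pattern ok");
  }
}
```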

          cnauroth Chris Nauroth added a comment -

          Mingliang Liu and Xiaoyu Yao, thank you for the code reviews. That's a great catch on the lack of fail in the test. I'm attaching patch v002 with the fix.

          cnauroth Chris Nauroth added a comment -

          Here is patch v003 with one more change in the test. I found that all of the bogus block IDs were causing a lot of log spam and slowing down the test, particularly the block state change messages and the FsDatasetImpl "Failed to delete replica" messages. I've changed the test to set the log level for these to WARN, which skips the log spam and speeds up the test quite a bit.
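          The idea can be shown in a self-contained sketch. The actual test uses Hadoop's log4j-based helpers; java.util.logging is used here only to keep the example runnable on its own, and the logger name is illustrative.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch: raise a noisy logger to WARN so that millions of
// per-block INFO messages (one per bogus block ID) are skipped entirely.
public class QuietNoisyLoggers {
  public static void main(String[] args) {
    Logger blockStateChange = Logger.getLogger("BlockStateChange");
    blockStateChange.setLevel(Level.WARNING); // suppress INFO-level spam
    // INFO is no longer loggable, so the per-block messages cost nothing.
    System.out.println(blockStateChange.isLoggable(Level.INFO));
  }
}
```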

          liuml07 Mingliang Liu added a comment -

          Shall we create a new jira for this?

          cnauroth Chris Nauroth added a comment -

          Yes, that's going to be a small change, but technically it should be grouped as a HADOOP JIRA, not HDFS. I created HADOOP-13039.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 6m 44s trunk passed
          +1 compile 0m 40s trunk passed with JDK v1.8.0_77
          +1 compile 0m 42s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 24s trunk passed
          +1 mvnsite 0m 52s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 56s trunk passed
          +1 javadoc 1m 8s trunk passed with JDK v1.8.0_77
          +1 javadoc 1m 46s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 46s the patch passed
          +1 compile 0m 36s the patch passed with JDK v1.8.0_77
          +1 javac 0m 36s the patch passed
          +1 compile 0m 39s the patch passed with JDK v1.7.0_95
          +1 javac 0m 39s the patch passed
          -1 checkstyle 0m 21s hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 244 unchanged - 1 fixed = 247 total (was 245)
          +1 mvnsite 0m 50s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 findbugs 2m 9s the patch passed
          +1 javadoc 1m 4s the patch passed with JDK v1.8.0_77
          +1 javadoc 1m 46s the patch passed with JDK v1.7.0_95
          -1 unit 31m 25s hadoop-hdfs in the patch failed with JDK v1.8.0_77.
          -1 unit 0m 34s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          -1 asflicense 0m 22s Patch generated 1 ASF License warnings.
          57m 29s



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799603/HDFS-10312.002.patch
          JIRA Issue HDFS-10312
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 677ff777c2a3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / af9bdbe
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/15200/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/15200/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15200/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15200/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15200/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/15200/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15200/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          Patch v004 addresses the Checkstyle warnings.

          arpitagarwal Arpit Agarwal added a comment -

          Hi Chris Nauroth, +1 for the v4 patch. Thanks for this improvement.

          The attached test-delta.patch reduces the test runtime from ~30 seconds to ~3 seconds by using a lower message size limit. What do you think?

          Also (and Chris etc. know this of course!), it is far from ideal to have ~6 million blocks on one storage directory. We should add a warning when we document this setting.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 patch 0m 15s HDFS-10312 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799625/test-delta.patch
          JIRA Issue HDFS-10312
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15207/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          arpitagarwal Arpit Agarwal added a comment -

          Pasting the delta inline to avoid confusing Jenkins. I'll kick off a build manually.

          diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestLargeBlockReport.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestLargeBlockReport.java
          index bd9c0a2..0dff33f 100644
          --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestLargeBlockReport.java
          +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestLargeBlockReport.java
          @@ -74,10 +74,10 @@ public void tearDown() {
           
             @Test
             public void testBlockReportExceedsLengthLimit() throws Exception {
          -    initCluster();
          +    initCluster(1024 * 1024);
               // Create a large enough report that we expect it will go beyond the RPC
               // server's length validation, and also protobuf length validation.
          -    StorageBlockReport[] reports = createReports(6000000);
          +    StorageBlockReport[] reports = createReports(200000);
               try {
                 nnProxy.blockReport(bpRegistration, bpId, reports,
                     new BlockReportContext(1, 0, reportId, fullBrLeaseId, sorted));
          @@ -91,9 +91,8 @@ public void testBlockReportExceedsLengthLimit() throws Exception {
           
             @Test
             public void testBlockReportSucceedsWithLargerLengthLimit() throws Exception {
          -    conf.setInt(IPC_MAXIMUM_DATA_LENGTH, 128 * 1024 * 1024); // 128 MB
          -    initCluster();
          -    StorageBlockReport[] reports = createReports(6000000);
          +    initCluster(2 * 1024 * 1024);
          +    StorageBlockReport[] reports = createReports(200000);
               nnProxy.blockReport(bpRegistration, bpId, reports,
                   new BlockReportContext(1, 0, reportId, fullBrLeaseId, sorted));
             }
          @@ -129,7 +128,8 @@ public void testBlockReportSucceedsWithLargerLengthLimit() throws Exception {
              *
              * @throws Exception if initialization fails
              */
          -  private void initCluster() throws Exception {
          +  private void initCluster(int ipcMaxDataLength) throws Exception {
          +    conf.setInt(IPC_MAXIMUM_DATA_LENGTH, ipcMaxDataLength);
               cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
               cluster.waitActive();
               dn = cluster.getDataNodes().get(0);
          
          cnauroth Chris Nauroth added a comment -

          Arpit Agarwal, I like your suggestion for speeding up the test. Unfortunately, I think this doesn't quite give us the same test coverage. To demonstrate this, apply patch v004, then revert the src/main changes, and then run the test. It will fail on a protobuf decoding exception. That's exactly the condition we want to test, and the src/main changes make the test pass. After applying the delta, that's no longer true. The test passes with or without the src/main changes. That's because with the smaller block report sizes, we don't hit the internal protobuf default of 64 MB maximum. Using a block report size of 6000000, we definitely push over 64 MB for the RPC message size, so we definitely trigger the right condition.
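          The threshold reasoning can be sanity-checked with quick arithmetic. In the sketch below, the three-longs-per-replica layout and the 10-byte varint worst case are assumptions for illustration, not figures taken from the patch.

```java
// Back-of-the-envelope check that ~6 million blocks can exceed protobuf's
// 64 MB default size limit, while 200,000 blocks cannot.
// Assumption: each finalized replica encodes as roughly 3 varint longs
// (block ID, length, generation stamp), and a varint-encoded long occupies
// at most 10 bytes on the wire.
public class BlockReportSizeEstimate {
  static long worstCaseBytes(long blocks) {
    final int LONGS_PER_REPLICA = 3;   // assumed per-replica field count
    final int MAX_VARINT64_BYTES = 10; // worst case for one varint64
    return blocks * LONGS_PER_REPLICA * MAX_VARINT64_BYTES;
  }

  public static void main(String[] args) {
    long limit = 64L * 1024 * 1024; // protobuf default size limit
    System.out.println(worstCaseBytes(6_000_000L) > limit); // 180,000,000 bytes
    System.out.println(worstCaseBytes(200_000L) > limit);   //   6,000,000 bytes
  }
}
```

          A 200,000-block report stays far below the 64 MB protobuf default, which is why the faster variant no longer exercises the failure path.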

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 6m 49s trunk passed
          +1 compile 0m 40s trunk passed with JDK v1.8.0_77
          +1 compile 0m 44s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 27s trunk passed
          +1 mvnsite 0m 55s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 2m 0s trunk passed
          +1 javadoc 1m 8s trunk passed with JDK v1.8.0_77
          +1 javadoc 1m 48s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 51s the patch passed
          +1 compile 0m 40s the patch passed with JDK v1.8.0_77
          +1 javac 0m 40s the patch passed
          +1 compile 0m 40s the patch passed with JDK v1.7.0_95
          +1 javac 0m 40s the patch passed
          -1 checkstyle 0m 20s hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 244 unchanged - 1 fixed = 247 total (was 245)
          +1 mvnsite 0m 52s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 15s the patch passed
          +1 javadoc 1m 7s the patch passed with JDK v1.8.0_77
          +1 javadoc 1m 52s the patch passed with JDK v1.7.0_95
          -1 unit 73m 40s hadoop-hdfs in the patch failed with JDK v1.8.0_77.
          -1 unit 69m 53s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 20s Patch does not generate ASF License warnings.
          169m 55s



          Reason Tests
          JDK v1.8.0_77 Failed junit tests hadoop.hdfs.TestFileAppend
            hadoop.hdfs.server.namenode.TestNamenodeRetryCache
            hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.TestHFlush
            hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
            hadoop.hdfs.server.namenode.TestNamenodeRetryCache
            hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
            hadoop.hdfs.server.datanode.TestFsDatasetCache



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799578/HDFS-10312.001.patch
          JIRA Issue HDFS-10312
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 55397ea62d2d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / af9bdbe
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/15199/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15199/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15199/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/15199/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HDFS-Build/15199/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15199/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15199/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          arpitagarwal Arpit Agarwal added a comment -

          Yeah, I agree that's rather unfortunate, since the change to the message length is not plumbed through without your patch.

          I think the missing code paths can be tested with a targeted precondition that ensures any change to the config setting is propagated to the BufferDecoder (and the CodedInputStream); that precondition will fail without your src/main changes. However, it's okay to evaluate this in a follow-up JIRA, and we don't need to hold up this one.

          +1 from me.
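As a rough illustration of that precondition idea — a self-contained sketch with a stand-in for BlockListAsLongs.BufferDecoder, not the actual Hadoop classes — the check amounts to verifying that the configured limit actually reaches the decoder rather than silently falling back to protobuf's 64 MB default:

```java
public class DecoderPlumbingSketch {
    static final int DEFAULT_MAX_DATA_LENGTH = 64 * 1024 * 1024; // 64 MB

    /** Simplified stand-in for BlockListAsLongs.BufferDecoder. */
    static class BufferDecoder {
        private final int maxDataLength;
        BufferDecoder(int maxDataLength) {
            this.maxDataLength = maxDataLength;
        }
        int getMaxDataLength() {
            return maxDataLength;
        }
    }

    static BufferDecoder newDecoder(int configuredMaxDataLength) {
        BufferDecoder decoder = new BufferDecoder(configuredMaxDataLength);
        // The precondition: an override of ipc.maximum.data.length must be
        // visible to the decoder. Without the src/main plumbing, the decoder
        // would still hold the 64 MB default and this check would fail.
        if (decoder.getMaxDataLength() != configuredMaxDataLength) {
            throw new IllegalStateException(
                "configured max data length was not propagated to the decoder");
        }
        return decoder;
    }

    public static void main(String[] args) {
        // Uptuned limit, as an operator would set via ipc.maximum.data.length.
        BufferDecoder decoder = newDecoder(128 * 1024 * 1024);
        System.out.println(decoder.getMaxDataLength());
    }
}
```

A test asserting on the decoder's effective limit (rather than on end-to-end block report decoding) could then fail fast without building a 64 MB+ report, which is the speedup being weighed against coverage here.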

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          -1 mvninstall 5m 31s root in trunk failed.
          -1 compile 0m 9s hadoop-hdfs in trunk failed with JDK v1.8.0_77.
          +1 compile 0m 42s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 23s trunk passed
          +1 mvnsite 0m 48s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 53s trunk passed
          +1 javadoc 1m 5s trunk passed with JDK v1.8.0_77
          +1 javadoc 1m 44s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 45s the patch passed
          +1 compile 0m 39s the patch passed with JDK v1.8.0_77
          -1 javac 6m 18s hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77 with JDK v1.8.0_77 generated 33 new + 0 unchanged - 0 fixed = 33 total (was 0)
          +1 javac 0m 39s the patch passed
          +1 compile 0m 38s the patch passed with JDK v1.7.0_95
          +1 javac 0m 38s the patch passed
          -1 checkstyle 0m 20s hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 244 unchanged - 1 fixed = 247 total (was 245)
          +1 mvnsite 0m 48s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          -1 whitespace 0m 0s The patch has 476 line(s) that end in whitespace. Use git apply --whitespace=fix.
          -1 whitespace 0m 9s The patch has 384 line(s) with tabs.
          +1 findbugs 2m 6s the patch passed
          +1 javadoc 1m 3s the patch passed with JDK v1.8.0_77
          +1 javadoc 1m 45s the patch passed with JDK v1.7.0_95
          -1 unit 60m 4s hadoop-hdfs in the patch failed with JDK v1.8.0_77.
          -1 unit 59m 25s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          -1 asflicense 0m 23s Patch generated 1 ASF License warnings.
          143m 10s



          Reason Tests
          JDK v1.8.0_77 Failed junit tests hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
            hadoop.hdfs.server.namenode.TestNamenodeRetryCache
            hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
            hadoop.hdfs.TestHFlush
            hadoop.hdfs.shortcircuit.TestShortCircuitCache
            hadoop.hdfs.server.namenode.TestEditLog
            hadoop.hdfs.server.namenode.TestNamenodeRetryCache
            hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799611/HDFS-10312.003.patch
          JIRA Issue HDFS-10312
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux c931bce5be3e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / af9bdbe
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          mvninstall https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/branch-mvninstall-root.txt
          compile https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
          findbugs v3.0.0
          javac hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77: https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/whitespace-eol.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/whitespace-tabs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15203/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/15203/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15203/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 7m 22s trunk passed
          +1 compile 0m 55s trunk passed with JDK v1.8.0_77
          +1 compile 0m 43s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 23s trunk passed
          +1 mvnsite 0m 56s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 2m 4s trunk passed
          +1 javadoc 1m 17s trunk passed with JDK v1.8.0_77
          +1 javadoc 1m 56s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 48s the patch passed
          +1 compile 0m 49s the patch passed with JDK v1.8.0_77
          +1 javac 0m 49s the patch passed
          +1 compile 0m 40s the patch passed with JDK v1.7.0_95
          +1 javac 0m 40s the patch passed
          -1 checkstyle 0m 22s hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 244 unchanged - 1 fixed = 247 total (was 245)
          +1 mvnsite 0m 56s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 findbugs 2m 15s the patch passed
          +1 javadoc 1m 16s the patch passed with JDK v1.8.0_77
          +1 javadoc 1m 55s the patch passed with JDK v1.7.0_95
          -1 unit 80m 35s hadoop-hdfs in the patch failed with JDK v1.8.0_77.
          -1 unit 95m 11s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 31s Patch does not generate ASF License warnings.
          204m 0s



          Reason Tests
          JDK v1.8.0_77 Failed junit tests hadoop.hdfs.server.namenode.TestNamenodeRetryCache
            hadoop.hdfs.server.namenode.ha.TestEditLogTailer
            hadoop.hdfs.server.blockmanagement.TestBlockManager
            hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
            hadoop.hdfs.TestFileAppend
            hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.server.namenode.TestNamenodeRetryCache
            hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
            hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
            hadoop.fs.TestSymlinkHdfsFileContext
            hadoop.hdfs.server.namenode.TestFileTruncate
          JDK v1.7.0_95 Timed out junit tests org.apache.hadoop.hdfs.TestDatanodeRegistration



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799611/HDFS-10312.003.patch
          JIRA Issue HDFS-10312
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux c3401102ce70 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / af9bdbe
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/15202/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/15202/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15202/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15202/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/15202/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HDFS-Build/15202/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15202/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15202/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 7m 5s trunk passed
          +1 compile 0m 40s trunk passed with JDK v1.8.0_77
          +1 compile 0m 45s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 24s trunk passed
          +1 mvnsite 0m 55s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 2m 2s trunk passed
          +1 javadoc 1m 8s trunk passed with JDK v1.8.0_77
          +1 javadoc 1m 57s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 47s the patch passed
          +1 compile 0m 39s the patch passed with JDK v1.8.0_77
          +1 javac 0m 39s the patch passed
          +1 compile 0m 41s the patch passed with JDK v1.7.0_95
          +1 javac 0m 41s the patch passed
          -1 checkstyle 0m 24s hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 242 unchanged - 3 fixed = 243 total (was 245)
          +1 mvnsite 0m 53s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 findbugs 2m 16s the patch passed
          +1 javadoc 1m 9s the patch passed with JDK v1.8.0_77
          +1 javadoc 1m 42s the patch passed with JDK v1.7.0_95
          -1 unit 59m 16s hadoop-hdfs in the patch failed with JDK v1.8.0_77.
          -1 unit 55m 1s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 21s Patch does not generate ASF License warnings.
          140m 43s



          Reason Tests
          JDK v1.8.0_77 Failed junit tests hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
            hadoop.hdfs.server.namenode.TestNamenodeRetryCache
            hadoop.hdfs.TestDFSUpgradeFromImage
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
            hadoop.hdfs.TestHFlush
            hadoop.hdfs.server.namenode.TestNamenodeRetryCache
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicy



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799621/HDFS-10312.004.patch
          JIRA Issue HDFS-10312
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux c989188cbae7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / af9bdbe
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/15208/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/15208/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15208/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15208/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/15208/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HDFS-Build/15208/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15208/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15208/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          brahmareddy Brahma Reddy Battula added a comment -

          We've seen the same issue and reported it as HDFS-8574. Per the discussion there, it could be solved by HDFS-9011, but I have not seen any progress on that.
          As Colin suggested there, "It would be simpler for the admin to create two (or more) storages on the same drive, and it wouldn't involve any code modification by us."

          Even now, the number of blocks per volume is exposed (HDFS-9425), so admins can monitor this.

          cnauroth Chris Nauroth added a comment -

          It appears the discussions in those other JIRAs missed the point that ipc.maximum.data.length controls only the maximum payload accepted by the RPC server. Without this patch, it is not sufficient to work around the size enforcement by protobuf, demonstrated in the stack trace that I included in prior comments. Asking admins to repartition blocks across multiple storages on the same drive isn't a viable workaround for them. HDFS-9011 is a much deeper change that will require further review. This patch is a simple way to unblock clusters that have already gotten into this state accidentally.
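To make the two independent checks concrete, here is a minimal plain-Java sketch (hypothetical names, not Hadoop or protobuf APIs): a block report must pass both the RPC server's payload cap, which ipc.maximum.data.length raises, and the protobuf decoder's own 64 MB limit, which it does not.

```java
// Sketch with hypothetical names: the two independent length checks a block
// report passes through. ipc.maximum.data.length raises only the first one;
// without this patch, the protobuf decoder's 64 MB limit still rejects it.
public class LengthGates {
  // Both limits default to 64 MB.
  static final long DEFAULT_LIMIT = 64L * 1024 * 1024;

  // Check 1: RPC server payload cap, tunable via ipc.maximum.data.length.
  static boolean rpcServerAccepts(long payloadLen, long ipcMaxDataLength) {
    return payloadLen <= ipcMaxDataLength;
  }

  // Check 2: protobuf decode-time cap, fixed at 64 MB unless the decoder
  // calls CodedInputStream.setSizeLimit() with a larger value.
  static boolean protobufDecoderAccepts(long messageLen, long decoderSizeLimit) {
    return messageLen <= decoderSizeLimit;
  }

  public static void main(String[] args) {
    long report = 80L * 1024 * 1024;   // an ~80 MB block report
    long ipcMax = 128L * 1024 * 1024;  // admin raised ipc.maximum.data.length
    // Passes the RPC cap, but still fails at protobuf decode time:
    System.out.println(rpcServerAccepts(report, ipcMax));              // true
    System.out.println(protobufDecoderAccepts(report, DEFAULT_LIMIT)); // false
  }
}
```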

          cnauroth Chris Nauroth added a comment -

          The remaining Checkstyle warning is for a long method. It's best not to address it within the scope of this patch.

          brahmareddy Brahma Reddy Battula added a comment -

          You are right. It would be non-trivial for admins to split an existing storage directory into multiple storage directories.

          With your patch, to get out of the current situation, ipc.maximum.data.length should be changed on both the NameNode and DataNode side.
          I am also fine with this approach.
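For reference, that override lives in core-site.xml on the affected hosts; the 128 MB value below is an illustrative choice, not a recommendation — size it to the largest expected block report:

```xml
<!-- core-site.xml; 128 MB is an illustrative value, not a recommendation. -->
<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>
```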

          cnauroth Chris Nauroth added a comment -

          With your patch, to come out the current case, ipc.maximum.data.length should be changed in both NN and DN side.

          The slightly strange thing is that it seems the 64 MB enforcement by protobuf only happens at time of decoding a message, not at time of creating the message. In my testing, I only saw problems on the server side consuming the message (the NameNode). I'm not sure that it would be strictly required to make the configuration change on DataNodes, but there is also no harm in doing it that way.
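That asymmetry can be sketched in plain Java (stand-in encode/decode methods, not the protobuf API; the ~24 bytes-per-block figure is an illustrative assumption): producing an oversized message succeeds, and the error only surfaces when a consumer decodes it.

```java
// Sketch with stand-in methods: protobuf enforces its size limit while
// reading a message, not while writing one, so an oversized block report
// leaves the DataNode without error and only fails at the NameNode.
public class DecodeTimeLimit {
  static final long SIZE_LIMIT = 64L * 1024 * 1024; // protobuf's default cap

  // Encoding performs no limit check (mirrors CodedOutputStream).
  static long encode(long blockCount, long bytesPerBlock) {
    return blockCount * bytesPerBlock; // serialized message length
  }

  // Decoding rejects anything over the limit (mirrors CodedInputStream).
  static boolean decode(long messageLen, long sizeLimit) {
    if (messageLen > sizeLimit) {
      throw new IllegalStateException(
          "Protocol message was too large. May be malicious. "
              + "Use CodedInputStream.setSizeLimit() to increase the size limit.");
    }
    return true;
  }

  public static void main(String[] args) {
    // ~6 million blocks at a few dozen bytes each easily clears 64 MB.
    long msg = encode(6_000_000L, 24L); // encoding itself "succeeds"
    try {
      decode(msg, SIZE_LIMIT);          // the failure only appears here
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());
    }
  }
}
```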

          cnauroth Chris Nauroth added a comment -

          Thank you for the reviews, everyone. The test failures were unrelated. I corrected the whitespace warning. I have committed this to trunk, branch-2, and branch-2.8.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9637 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9637/)
          HDFS-10312. Large block reports may fail to decode at NameNode due to 64 (cnauroth: rev 63ac2db59af2b50e74dc892cae1dbc4d2e061423)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestLargeBlockReport.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java

            People

            • Assignee:
              cnauroth Chris Nauroth
              Reporter:
              cnauroth Chris Nauroth
            • Votes:
              0
              Watchers:
              13

              Dates

              • Created:
                Updated:
                Resolved:

                Development