Hadoop HDFS / HDFS-11608

HDFS write crashed with block size greater than 2 GB

    Details

    • Hadoop Flags:
      Reviewed

      Description

      We've seen HDFS write crashes with huge block sizes. For example, when writing a 3 GB file using a block size greater than 2 GB (e.g., 3 GB), the HDFS client throws an out-of-memory exception and the DataNode raises an IOException. After raising the heap size limit, a DFSOutputStream ResponseProcessor exception is seen, followed by a broken pipe and pipeline recovery.

      Given below is the DN exception:

      2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
      java.io.IOException: Incorrect value for packet payload size: 2147483128
              at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
              at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
              at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
              at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
              at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
              at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
              at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
              at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
              at java.lang.Thread.run(Thread.java:745)
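The root cause is integer arithmetic: once the remaining bytes in a block exceed Integer.MAX_VALUE, packet-size computations done in int overflow, and the DataNode rejects the resulting payload size. A minimal, self-contained sketch of the failure mode (the values are illustrative; the 16 MB constant mirrors PacketReceiver's cap):

```java
public class PacketSizeOverflow {
    // PacketReceiver caps packet payloads at 16 MB; larger values are rejected.
    static final int MAX_PACKET_SIZE = 16 * 1024 * 1024;

    public static void main(String[] args) {
        long blockSize = 3L * 1024 * 1024 * 1024; // 3 GB block
        long bytesCurBlock = 0;                   // nothing written yet

        // Computing the remaining bytes in the block as an int overflows
        // once blockSize - bytesCurBlock exceeds Integer.MAX_VALUE (~2 GB).
        int overflowed = (int) (blockSize - bytesCurBlock);
        System.out.println(overflowed); // -1073741824: 3 GB does not fit in an int

        // Even without the cast, the requested payload far exceeds the DN's
        // 16 MB cap, which is why it logs "Incorrect value for packet payload size".
        long requested = blockSize - bytesCurBlock;
        System.out.println(requested > MAX_PACKET_SIZE); // true
    }
}
```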
      
      1. HDFS-11608.000.patch
        1 kB
        Xiaobing Zhou
      2. HDFS-11608.001.patch
        36 kB
        Xiaobing Zhou
      3. HDFS-11608.002.patch
        11 kB
        Xiaobing Zhou
      4. HDFS-11608.003.patch
        11 kB
        Xiaobing Zhou
      5. HDFS-11608-branch-2.7.003.patch
        13 kB
        Xiaobing Zhou

        Issue Links

          Activity

          vinodkv Vinod Kumar Vavilapalli added a comment -

          2.8.1 became a security release. Moving fix-version to 2.8.2 after the fact.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11591 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11591/)
          HDFS-11608. HDFS write crashed with block size greater than 2 GB. (xyao: rev 0eacd4c13be9bad0fbed9421a6539c64bbda4df1)

          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketReceiver.java
          arpitagarwal Arpit Agarwal added a comment -

          +1 for the branch-2.7 patch.

          I've committed it after running the affected unit tests with JDK7 locally. Thanks Xiaobing Zhou.

          xiaobingo Xiaobing Zhou added a comment -

          Posted 2.7 patch. Thanks Xiaoyu Yao for committing it and all for reviews.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11543 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11543/)
          HDFS-11608. HDFS write crashed with block size greater than 2 GB. (xyao: rev 0eacd4c13be9bad0fbed9421a6539c64bbda4df1)

          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketReceiver.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          xyao Xiaoyu Yao added a comment - - edited

          Thanks Xiaobing Zhou for the contribution and all for the reviews and discussions. I committed the patch to trunk, branch-2, and branch-2.8.

          Xiaobing Zhou, can you help prepare a patch for branch-2.7, which has the same issue?

          xyao Xiaoyu Yao added a comment -

          Thanks Xiaobing Zhou for the update. +1 for the v003 patch. I will commit it shortly.

          The Jenkins failure seems unrelated to this change and does not reproduce on my local machine.
          I opened HDFS-11632 to track the flaky unit test issue.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 30s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 27s Maven dependency ordering for branch
          +1 mvninstall 15m 17s trunk passed
          +1 compile 1m 24s trunk passed
          +1 checkstyle 0m 41s trunk passed
          +1 mvnsite 1m 38s trunk passed
          +1 mvneclipse 0m 27s trunk passed
          +1 findbugs 3m 16s trunk passed
          +1 javadoc 1m 6s trunk passed
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 1m 32s the patch passed
          +1 compile 1m 31s the patch passed
          +1 javac 1m 31s the patch passed
          +1 checkstyle 0m 44s the patch passed
          +1 mvnsite 1m 41s the patch passed
          +1 mvneclipse 0m 27s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 4m 13s the patch passed
          +1 javadoc 1m 6s the patch passed
          +1 unit 1m 22s hadoop-hdfs-client in the patch passed.
          -1 unit 86m 9s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 21s The patch does not generate ASF License warnings.
          125m 41s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.namenode.TestCacheDirectives



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-11608
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12862356/HDFS-11608.003.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 7ecefc15c399 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 1a9439e
          Default Java 1.8.0_121
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/18997/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/18997/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/18997/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          arpitagarwal Arpit Agarwal added a comment -

          +1 for the v3 patch pending Jenkins. Thanks for adding the unit test Xiaobing!

          xiaobingo Xiaobing Zhou added a comment -

          Posted v3 with a fix that sets the base dir for the newly created cluster, avoiding conflicts over the shared root dir. This resolved the failure. Thanks Chen Liang for the check.

          dfsConf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR,
                    baseDir.getAbsolutePath());
          
          vagarychen Chen Liang added a comment -

          v002 patch LGTM, but it looks like TestDFSOutputStream#testNoLocalWriteFlag is consistently failing in my local runs. I noticed, though, that it is always the next test right after the newly added TestDFSOutputStream#testPreventOverflow; disabling the new test makes it pass. I guess the new test modifies the cluster variable in some way, causing the next test, testNoLocalWriteFlag, to fail.

          All the other failed tests passed in my local run, so they are probably unrelated.

          arpitagarwal Arpit Agarwal added a comment -

          +1 for the fix. Thanks Xiaobing Zhou.

          Still need to review the unit test.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 26s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 27s Maven dependency ordering for branch
          +1 mvninstall 16m 21s trunk passed
          +1 compile 1m 33s trunk passed
          +1 checkstyle 0m 42s trunk passed
          +1 mvnsite 1m 36s trunk passed
          +1 mvneclipse 0m 29s trunk passed
          +1 findbugs 3m 32s trunk passed
          +1 javadoc 1m 7s trunk passed
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 1m 28s the patch passed
          +1 compile 1m 49s the patch passed
          +1 javac 1m 49s the patch passed
          +1 checkstyle 0m 41s the patch passed
          +1 mvnsite 1m 35s the patch passed
          +1 mvneclipse 0m 24s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 3m 55s the patch passed
          +1 javadoc 1m 6s the patch passed
          +1 unit 1m 13s hadoop-hdfs-client in the patch passed.
          -1 unit 77m 11s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 23s The patch does not generate ASF License warnings.
          117m 48s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.TestDFSOutputStream
            hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout
            hadoop.hdfs.qjournal.client.TestQuorumJournalManager



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-11608
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12861981/HDFS-11608.002.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux d249501de219 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9cc04b4
          Default Java 1.8.0_121
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/18976/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/18976/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/18976/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xiaobingo Xiaobing Zhou added a comment -

          Posted v2. Thanks, and apologies for the formatting changes in the rushed patch. Moved the check to the constructor.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 30s Maven dependency ordering for branch
          +1 mvninstall 13m 36s trunk passed
          +1 compile 1m 38s trunk passed
          +1 checkstyle 0m 44s trunk passed
          +1 mvnsite 1m 38s trunk passed
          +1 mvneclipse 0m 28s trunk passed
          +1 findbugs 3m 27s trunk passed
          +1 javadoc 1m 7s trunk passed
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 1m 30s the patch passed
          +1 compile 1m 30s the patch passed
          +1 javac 1m 30s the patch passed
          -0 checkstyle 0m 42s hadoop-hdfs-project: The patch generated 4 new + 29 unchanged - 1 fixed = 33 total (was 30)
          +1 mvnsite 1m 30s the patch passed
          +1 mvneclipse 0m 28s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 3m 49s the patch passed
          +1 javadoc 0m 59s the patch passed
          +1 unit 0m 58s hadoop-hdfs-client in the patch passed.
          -1 unit 67m 52s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 22s The patch does not generate ASF License warnings.
          104m 40s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
            hadoop.hdfs.TestDFSOutputStream
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-11608
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12861944/HDFS-11608.001.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 2979e9e0e468 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 56ab02e
          Default Java 1.8.0_121
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/18972/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/18972/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/18972/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/18972/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          arpitagarwal Arpit Agarwal added a comment -
                if (writePacketSize > PacketReceiver.MAX_PACKET_SIZE) {
                  LOG.warn(
                      "Configured write packet size is larger than 16M as max, using 16M.");
                  writePacketSize = PacketReceiver.MAX_PACKET_SIZE;
                }
          

          If the packet size is misconfigured this warning may be logged very verbosely since this method is called many times during a large data transfer. I think we can move this warning to the DfsClientConf constructor, or maybe just remove it altogether and set packet size to PacketReceiver.MAX_PACKET_SIZE silently.

          Also the check can be moved to the DFSOutputStream constructor, so DFSOutputStream#writePacketSize is always capped at MAX_PACKET_SIZE.

          Also thanks for writing up the unit test. I am looking at the test case, hopefully it can be simplified to not use reflection.
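A minimal sketch of the constructor-time cap being suggested (the class and field names here are illustrative, not the actual DfsClientConf/DFSOutputStream code; the 16 MB constant mirrors PacketReceiver.MAX_PACKET_SIZE):

```java
public class WritePacketSizeCap {
    // PacketReceiver rejects payloads larger than 16 MB.
    static final int MAX_PACKET_SIZE = 16 * 1024 * 1024;

    private final int writePacketSize;

    // Capping once at construction means any warning is logged a single
    // time, instead of on every packet of a large data transfer.
    WritePacketSizeCap(int configuredPacketSize) {
        this.writePacketSize = Math.min(configuredPacketSize, MAX_PACKET_SIZE);
    }

    int getWritePacketSize() {
        return writePacketSize;
    }

    public static void main(String[] args) {
        // A misconfigured 64 MB packet size is silently clamped to 16 MB.
        WritePacketSizeCap conf = new WritePacketSizeCap(64 * 1024 * 1024);
        System.out.println(conf.getWritePacketSize()); // 16777216
    }
}
```

Because the field is capped once and then final, every later read of writePacketSize is guaranteed to be within the DataNode's limit, so per-packet checks become unnecessary.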

          arpitagarwal Arpit Agarwal added a comment -

          Hi Xiaobing Zhou, thanks for updating the patch.

          The v2 patch seems to have many formatting changes. Can you please post a minimal patch without them?

          xiaobingo Xiaobing Zhou added a comment - - edited

          Posted v1 to add some tests. Thanks, Xiaoyu Yao.

          xyao Xiaoyu Yao added a comment -

          The calculation in my previous comment was incorrect. The actual number of packets and the average packet size are the same with this change.

          The second Math.min may not be necessary, assuming we have validated that the writePacketSize is configured properly.

          xyao Xiaoyu Yao added a comment -

          Xiaobing Zhou, the patch solved the 2nd overflow issue introduced by HDFS-7308. However, it changes the number of packets and the packet size for large block sizes with the code below.

          final long psize = Math.min(blockSize - getStreamer().getBytesCurBlock(),
              dfsClient.getConf().getWritePacketSize());
          final int ipsize = (int) Math.min(psize, Integer.MAX_VALUE);
          computePacketChunkSize(psize, bytesPerChecksum);

          After this change, a 3 GB block with patch v01 results in only 50 packets with an average size of 60 MB. Before this change, a 3 GB block results in 2080930 packets with an average size of 150 KB. I vaguely remember that the DN has some limit on the maximum packet size (e.g., 16 MB?). Can you check and ensure
          1) large block sizes work end-to-end as they did before HDFS-7308?
          2) performance is not degraded from without HDFS-7308 (2.6.0) to with HDFS-7308 + HDFS-11608?
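As a rough sanity check on the packet-count trade-off, the arithmetic can be sketched as follows (a hypothetical illustration; 64 KB is the default dfs.client-write-packet-size and 16 MB mirrors PacketReceiver's cap):

```java
public class PacketCountSketch {
    public static void main(String[] args) {
        long blockSize = 3L * 1024 * 1024 * 1024; // 3 GB block
        long defaultPacketSize = 64 * 1024;       // default client write packet size
        long cappedPacketSize = 16 * 1024 * 1024; // DN-side maximum payload

        // Number of packets = ceil(blockSize / packetSize). Larger packets mean
        // far fewer round trips, but bigger per-packet buffers on client and DN.
        System.out.println((blockSize + defaultPacketSize - 1) / defaultPacketSize); // 49152
        System.out.println((blockSize + cappedPacketSize - 1) / cappedPacketSize);   // 192
    }
}
```

Either way, the packet size must never exceed the DataNode-side cap, or the write fails with the "Incorrect value for packet payload size" error shown in the description.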

          xiaobingo Xiaobing Zhou added a comment -

          You are right Xiaoyu Yao, thanks, I've made the change in place.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 24s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 14m 5s trunk passed
          +1 compile 0m 35s trunk passed
          +1 checkstyle 0m 17s trunk passed
          +1 mvnsite 0m 37s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 1m 42s trunk passed
          +1 javadoc 0m 23s trunk passed
          +1 mvninstall 0m 35s the patch passed
          +1 compile 0m 43s the patch passed
          +1 javac 0m 43s the patch passed
          +1 checkstyle 0m 16s the patch passed
          +1 mvnsite 0m 43s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 6s the patch passed
          +1 javadoc 0m 25s the patch passed
          +1 unit 1m 10s hadoop-hdfs-client in the patch passed.
          +1 asflicense 0m 20s The patch does not generate ASF License warnings.
          26m 30s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-11608
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12861531/HDFS-11608.000.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 1af2ec578c61 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 73835c7
          Default Java 1.8.0_121
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/18944/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/18944/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xyao Xiaoyu Yao added a comment -

          Thanks for the analysis Xiaobing Zhou.

          i.e. bodySize is 2147483615 as a result of (2147483648 - 33))

          Did you miss a minus sign before 2147483648? It should be -2147483648 - 33, which overflows into the positive value 2147483615 as bodySize.

          xiaobingo Xiaobing Zhou added a comment -

          Posted an initial patch; will try to add some tests in the next revision.

          xiaobingo Xiaobing Zhou added a comment - - edited

          After some debugging, it turns out this is caused by integer overflow. adjustChunkBoundary casts a long to int inside Math.min, producing the first overflow (i.e., psize == -2147483648). Moreover, with the computePacketChunkSize change from HDFS-7308, (psize - PacketHeader.PKT_MAX_HEADER_LEN) overflows a second time (i.e., bodySize becomes 2147483615, the result of -2147483648 - 33), so chunksPerPacket == 4161789 and packetSize == 516 * 4161789 == 2147483124, ultimately causing the out-of-memory and invalid-payload issues.

          Note that without HDFS-7308, Math.max(psize/chunkSize, 1) does not hit the second overflow; it simply yields 1, which is safe.

          the code in HDFS-7308

             private void computePacketChunkSize(int psize, int csize) {
          +    final int bodySize = psize - PacketHeader.PKT_MAX_HEADER_LEN;
               final int chunkSize = csize + getChecksumSize();
          -    chunksPerPacket = Math.max(psize/chunkSize, 1);
          +    chunksPerPacket = Math.max(bodySize/chunkSize, 1);
          

          DFSOutputStream#adjustChunkBoundary

          if (!getStreamer().getAppendChunk()) {
            int psize = Math.min((int)(blockSize - getStreamer().getBytesCurBlock()),
                dfsClient.getConf().getWritePacketSize());
            computePacketChunkSize(psize, bytesPerChecksum);
          }
          
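
          To make the two overflows concrete, here is a standalone sketch reproducing the numbers in this comment. Constants are inlined from the analysis above (PKT_MAX_HEADER_LEN == 33, chunk size 512 + 4 == 516) rather than taken from the real PacketHeader and DFSOutputStream classes; the scenario is a 3 GB block with 1 GB already written, so exactly 2 GB remain:

          ```java
          public class OverflowDemo {
              static final int PKT_MAX_HEADER_LEN = 33;  // header length cited above
              static final int CHUNK_SIZE = 512 + 4;     // bytesPerChecksum + checksum size

              public static void main(String[] args) {
                  long blockSize = 3L << 30;             // 3 GB
                  long bytesCurBlock = 1L << 30;         // 1 GB written, 2 GB remaining
                  int writePacketSize = 65536;

                  // First overflow: the long difference is cast to int BEFORE Math.min.
                  int psize = Math.min((int) (blockSize - bytesCurBlock), writePacketSize);
                  System.out.println(psize);             // -2147483648

                  // Second overflow (with HDFS-7308): subtracting the header length wraps.
                  int bodySize = psize - PKT_MAX_HEADER_LEN;
                  System.out.println(bodySize);          // 2147483615

                  int chunksPerPacket = Math.max(bodySize / CHUNK_SIZE, 1);
                  System.out.println(chunksPerPacket);   // 4161789
                  System.out.println(CHUNK_SIZE * chunksPerPacket); // 2147483124

                  // Without HDFS-7308 the division uses the negative psize directly,
                  // so Math.max clamps the result to a single chunk per packet.
                  System.out.println(Math.max(psize / CHUNK_SIZE, 1)); // 1
              }
          }
          ```

          The final packetSize of 2147483124 matches the order of magnitude of the "Incorrect value for packet payload size" the DataNode rejects in the description.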
          arpitagarwal Arpit Agarwal added a comment -

          Thanks for reporting this Xiaobing Zhou. Here are the client side exceptions from the original description (to keep the description concise).

          Client out-of-mem exception,

          17/03/30 07:13:50 WARN hdfs.DFSClient: Caught exception
          java.lang.InterruptedException
          	at java.lang.Object.wait(Native Method)
          	at java.lang.Thread.join(Thread.java:1245)
          	at java.lang.Thread.join(Thread.java:1319)
          	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:624)
          	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:592)
          	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
          Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
          	at org.apache.hadoop.hdfs.util.ByteArrayManager$NewByteArrayWithoutLimit.newByteArray(ByteArrayManager.java:308)
          	at org.apache.hadoop.hdfs.DFSOutputStream.createPacket(DFSOutputStream.java:197)
          	at org.apache.hadoop.hdfs.DFSOutputStream.writeChunkImpl(DFSOutputStream.java:1906)
          	at org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(DFSOutputStream.java:1884)
          	at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:206)
          	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
          	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
          	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2321)
          	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303)
          	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
          	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
          	at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
          	at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
          	at HdfsWriterOutputStream.run(HdfsWriterOutputStream.java:57)
          	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
          	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
          	at HdfsWriterOutputStream.main(HdfsWriterOutputStream.java:77)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:498)
          	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
          	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
          

          Client ResponseProcessor exception,

          17/03/30 18:20:12 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-1828245847-192.168.64.101-1490851685890:blk_1073741859_1040
          java.io.EOFException: Premature EOF: no length prefix available
          	at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
          	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
          	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:748)
          17/03/30 18:22:32 WARN hdfs.DFSClient: DataStreamer Exception
          java.io.IOException: Broken pipe
          	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
          	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
          	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
          	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
          	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
          	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
          	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
          	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
          	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
          	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
          	at java.io.DataOutputStream.write(DataOutputStream.java:107)
          	at org.apache.hadoop.hdfs.DFSPacket.writeTo(DFSPacket.java:176)
          	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:522)
          
          xiaobingo Xiaobing Zhou added a comment -

          Posted it Wei-Chiu Chuang, thanks.

          jojochuang Wei-Chiu Chuang added a comment -

          Hello Xiaobing Zhou. Thanks for filing the jira. If it is a critical bug, please post the details in the description.

          Thanks!


            People

            • Assignee:
              xiaobingo Xiaobing Zhou
            • Reporter:
              xiaobingo Xiaobing Zhou
            • Votes:
              0
            • Watchers:
              11

              Dates

              • Created:
                Updated:
                Resolved: