Hadoop HDFS / HDFS-13524

Occasional "All datanodes are bad" error in TestLargeBlock#testLargeBlockSize


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.8.0, 3.0.0-alpha1
    • Fix Version/s: 3.2.0, 3.1.1, 3.0.4
    • Component/s: None
    • Labels: None

    Description

      TestLargeBlock#testLargeBlockSize may occasionally fail with the error:

      All datanodes [DatanodeInfoWithStorage[127.0.0.1:44968,DS-acddd79e-cdf1-4ac5-aac5-e804a2e61600,DISK]] are bad. Aborting...

      Tracing back, the error is due to the stress placed on the host while it writes a 2 GB block, which causes the write pipeline ack read to time out:

      2017-09-10 22:16:07,285 [DataXceiver for client DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] INFO datanode.DataNode (DataXceiver.java:writeBlock(742)) - Receiving BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001 src: /127.0.0.1:57794 dest: /127.0.0.1:44968
      2017-09-10 22:16:50,402 [DataXceiver for client DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] WARN datanode.DataNode (BlockReceiver.java:flushOrSync(434)) - Slow flushOrSync took 5383ms (threshold=300ms), isSync:false, flushTotalNanos=5383638982ns, volume=file:/tmp/tmp.1oS3ZfDCwq/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/
      2017-09-10 22:17:54,427 [ResponseProcessor for block BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001] WARN hdfs.DataStreamer (DataStreamer.java:run(1214)) - Exception for BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001
      java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/127.0.0.1:57794 remote=/127.0.0.1:44968]
      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
      at java.io.FilterInputStream.read(FilterInputStream.java:83)
      at java.io.FilterInputStream.read(FilterInputStream.java:83)
      at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:434)
      at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
      at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1104)
      2017-09-10 22:17:54,432 [DataXceiver for client DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] INFO datanode.DataNode (BlockReceiver.java:receiveBlock(1000)) - Exception for BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001
      java.io.IOException: Connection reset by peer
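
      For reference, the 65000 ms in the SocketTimeoutException above is consistent with how the DFS client derives its pipeline ack read timeout: the base client socket timeout (dfs.client.socket-timeout, default 60000 ms) plus a 5000 ms extension per datanode in the pipeline. With the single-datanode pipeline this test runs on:

      60000 ms + 5000 ms × 1 datanode = 65000 ms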

      Instead of raising the read timeout, I suggest increasing the cluster size from the default of 1 datanode to 3, so that the client has the opportunity to choose a different datanode and retry when the pipeline fails.
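
      As a rough sketch of what that could look like (hypothetical; the actual change is in the attached patches), the test would build its MiniDFSCluster with three datanodes instead of the default one:

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.hdfs.HdfsConfiguration;
          import org.apache.hadoop.hdfs.MiniDFSCluster;

          // Hypothetical sketch: start the test cluster with 3 datanodes so that
          // a failed write pipeline can be rebuilt with a replacement datanode.
          Configuration conf = new HdfsConfiguration();
          MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
              .numDataNodes(3)  // default is 1; with 3 DNs the client can retry elsewhere
              .build();
          try {
            cluster.waitActive();
            // ... run the large-block write as in testLargeBlockSize ...
          } finally {
            cluster.shutdown();
          }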

      I suspect this began failing after HDFS-13103, which introduced the client acknowledgement read timeout in Hadoop 2.8/3.0.0-alpha1.
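
      For comparison, the alternative this proposal avoids, raising the client-side read timeout, would look roughly like the following (using the standard dfs.client.socket-timeout key on the Configuration from the sketch above; the value is only illustrative):

          // Hypothetical alternative, deliberately not taken: raise the DFS
          // client socket timeout, the base of the 65000 ms ack timeout above.
          conf.setInt("dfs.client.socket-timeout", 120000); // default is 60000 ms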

      Attachments

        1. HDFS-13524.001.patch
          1 kB
          Siyao Meng
        2. HDFS-13524.002.patch
          2 kB
          Siyao Meng


            People

              Assignee: Siyao Meng (smeng)
              Reporter: Wei-Chiu Chuang (weichiu)
              Votes: 0
              Watchers: 3
