Hadoop HDFS / HDFS-2095

org.apache.hadoop.hdfs.server.datanode.DataNode#checkDiskError produces a check storm, making the DataNode unavailable


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 0.21.0
    • Fix Version/s: None
    • Component/s: datanode
    • Labels: None

    Description

      I can see that if the data node receives an IO error, this can cause a checkDir storm.
      What I mean:
      1) Any error produces a DataNode.checkDiskError call.
      2) This call locks the volume set:
      java.lang.Thread.State: RUNNABLE
      at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
      at java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:228)
      at java.io.File.exists(File.java:733)
      at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsCheck(DiskChecker.java:65)
      at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:86)
      at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.checkDirTree(FSDataset.java:228)
      at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.checkDirTree(FSDataset.java:232)
      at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.checkDirTree(FSDataset.java:232)
      at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.checkDirTree(FSDataset.java:232)
      at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.checkDirs(FSDataset.java:414)
      at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet.checkDirs(FSDataset.java:617)

        - locked <0x000000080a8faec0> (a org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.checkDataDir(FSDataset.java:1681)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:745)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:735)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.close(BlockReceiver.java:202)
        at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:151)
        at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:167)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:646)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:352)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
        at java.lang.Thread.run(Thread.java:619)

      3) This produces timeouts on other calls, e.g.:
      2011-06-17 17:35:03,922 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskError: exception:
      java.io.InterruptedIOException
      at java.io.FileOutputStream.writeBytes(Native Method)
      at java.io.FileOutputStream.write(FileOutputStream.java:260)
      at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
      at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
      at java.io.DataOutputStream.flush(DataOutputStream.java:106)
      at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.close(BlockReceiver.java:183)
      at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:151)
      at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:167)
      at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:646)
      at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:352)
      at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
      at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
      at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
      at java.lang.Thread.run(Thread.java:619)

      4) This, in turn, produces more "dir check" calls.

      5) The whole cluster becomes very slow because of the half-working node.
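      The storm in steps 1-4 could be broken by coalescing triggers so that concurrent IO errors do not each start a full volume scan. Below is a minimal illustrative sketch (the class name `DiskCheckThrottle`, its fields, and the cooldown value are assumptions for illustration, not the logic of the attached patches): at most one disk check runs at a time, and repeat triggers within a cooldown window are skipped instead of queuing on the volume-set lock.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical guard to place in front of DataNode.checkDiskError:
// concurrent callers never block behind a running check, and checks
// are rate-limited to one per cooldown window.
public class DiskCheckThrottle {
    private final long cooldownMillis;
    private final AtomicBoolean checking = new AtomicBoolean(false);
    private final AtomicLong lastCheckMillis = new AtomicLong(0);

    public DiskCheckThrottle(long cooldownMillis) {
        this.cooldownMillis = cooldownMillis;
    }

    /** Returns true only for the one caller that should run the check now. */
    public boolean tryAcquire(long nowMillis) {
        if (nowMillis - lastCheckMillis.get() < cooldownMillis) {
            return false;  // a check finished recently; skip this trigger
        }
        // Lock-free claim: exactly one thread flips false -> true.
        return checking.compareAndSet(false, true);
    }

    /** Called by the winning thread once its disk check completes. */
    public void release(long nowMillis) {
        lastCheckMillis.set(nowMillis);
        checking.set(false);
    }

    public static void main(String[] args) {
        DiskCheckThrottle t = new DiskCheckThrottle(60_000);
        System.out.println(t.tryAcquire(100_000));  // first trigger runs the check
        System.out.println(t.tryAcquire(100_000));  // concurrent trigger coalesced
        t.release(100_000);
        System.out.println(t.tryAcquire(130_000));  // within cooldown: skipped
        System.out.println(t.tryAcquire(200_000));  // cooldown elapsed: runs again
    }
}
```

      With such a guard, the many DataXceiver threads that hit IO errors while a scan is already in progress would return immediately rather than piling up on the FSVolumeSet monitor shown in the stack trace above.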

      Attachments

        1. HDFS-2095.patch
          2 kB
          Rohit Kochar
        2. patch.diff
          1 kB
          Vitalii Tymchyshyn
        3. patch2.diff
          1 kB
          Vitalii Tymchyshyn
        4. pathch3.diff
          3 kB
          Vitalii Tymchyshyn

        Issue Links

        Activity


          People

            Assignee: Todd Lipcon (tlipcon)
            Reporter: Vitalii Tymchyshyn (tivv)
            Votes: 0
            Watchers: 11

            Dates

              Created:
              Updated:
              Resolved:
