Hadoop Common / HADOOP-990

Datanode doesn't retry when a write to one (full) drive fails


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.12.0
    • Component/s: None
    • Labels: None

      Description

      When one drive is 99.9% full and the datanode chooses that drive to write to, the write fails with:

      2007-02-07 18:16:56,574 WARN org.apache.hadoop.dfs.DataNode: DataXCeiver
      org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: No space left on device
      at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:801)
      at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:563)
      at java.lang.Thread.run(Thread.java:595)

      Combined with HADOOP-940, these failed blocks stay under-replicated.
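
      For illustration, a retry-based fix would look roughly like the sketch below. This is illustrative only (not the attached patch); RoundRobinVolumeChooser and its free-space check are assumed names, not Hadoop APIs.

      import java.io.File;
      import java.io.IOException;
      import java.util.List;

      /**
       * Hypothetical sketch: a round-robin volume chooser that skips volumes
       * without enough free space instead of letting the block write fail
       * with "No space left on device".
       */
      class RoundRobinVolumeChooser {
          private final List<File> volumes;   // data directories configured for the datanode
          private int next = 0;               // round-robin cursor

          RoundRobinVolumeChooser(List<File> volumes) {
              this.volumes = volumes;
          }

          /**
           * Return a volume with at least blockSize bytes free, trying each
           * configured volume once before giving up.
           */
          synchronized File chooseVolume(long blockSize) throws IOException {
              for (int attempt = 0; attempt < volumes.size(); attempt++) {
                  File candidate = volumes.get(next);
                  next = (next + 1) % volumes.size();
                  // Skip nearly full volumes rather than failing the write.
                  if (candidate.getUsableSpace() >= blockSize) {
                      return candidate;
                  }
              }
              throw new IOException("Out of space: no volume can hold " + blockSize + " bytes");
          }
      }

      In a sketch like this, writeBlock() would ask the chooser for a volume with room for the block up front, rather than aborting the DataXceiver when the first drive it tries is full.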

        Attachments

        1. HADOOP-990-1.patch
          2 kB
          Raghu Angadi
        2. HADOOP-990-2.patch
          0.6 kB
          Raghu Angadi
        3. HADOOP-990-3.patch
          2 kB
          Raghu Angadi


            People

            • Assignee: Raghu Angadi (rangadi)
            • Reporter: Koji Noguchi (knoguchi)
            • Votes: 0
            • Watchers: 0

              Dates

              • Created:
              • Updated:
              • Resolved: