Hadoop HDFS / HDFS-56

Datanodes get error message "is valid, and cannot be written to"


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved

    Description

      >> Copy from dev list:
      Our cluster has 4 nodes, and I set the mapred.submit.replication parameter to 2 on all nodes and the master. Everything has been restarted.
      Unfortunately, we still see the same exception:

      2007-09-05 17:01:59,623 ERROR org.apache.hadoop.dfs.DataNode: DataXceiver: java.io.IOException: Block blk_-5969983648201186681 is valid, and cannot be written to.
      at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:515)
      at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:822)
      at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:727)
      at java.lang.Thread.run(Thread.java:595)
      >> end of copy

      The message shows that the namenode scheduled replication of a block to a datanode that already holds a replica of that block. The namenode's block placement algorithm ensures that it does not schedule a block to a datanode that is confirmed to hold a replica of the block, but it is not aware of in-transit block placements (i.e., placements that have been scheduled but not yet confirmed), so occasionally we may still see "is valid, and cannot be written to" errors.
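
      For reference, the datanode-side check that raises this exception behaves roughly like the sketch below; the class and member names are illustrative, not the actual FSDataset internals, and the sketch assumes only that a datanode rejects writes to a block it already holds as a finalized, valid replica.

      import java.io.IOException;
      import java.util.HashSet;
      import java.util.Set;

      // Illustrative sketch of the datanode-side validity check (not the real FSDataset code).
      class BlockStoreSketch {
          // Block ids this datanode already holds as finalized, valid replicas.
          private final Set<Long> validBlocks = new HashSet<Long>();

          void addValidBlock(long blockId) {
              validBlocks.add(blockId);
          }

          // The datanode refuses to open a write stream for a block it already
          // considers valid, which is what happens when the namenode targets it again.
          void writeToBlock(long blockId) throws IOException {
              if (validBlocks.contains(blockId)) {
                  throw new IOException("Block blk_" + blockId
                      + " is valid, and cannot be written to.");
              }
              // ... otherwise create the local block file and return a write stream ...
          }
      }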

      A fix is to keep track of all in-transit block placements and have the block placement algorithm treat these to-be-confirmed replicas like existing replicas when choosing targets.
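
      A minimal sketch of that approach, assuming a hypothetical PendingPlacementTracker that the namenode consults during target selection (all class and method names below are illustrative, not existing Hadoop code):

      import java.util.HashMap;
      import java.util.HashSet;
      import java.util.Map;
      import java.util.Set;

      // Hypothetical tracker for in-transit (scheduled but not yet confirmed) block placements.
      class PendingPlacementTracker {
          // block id -> datanodes asked to receive the block that have not yet confirmed it
          // (e.g. via a blockReceived notification or a block report)
          private final Map<Long, Set<String>> pending = new HashMap<Long, Set<String>>();

          synchronized void scheduled(long blockId, String datanode) {
              Set<String> nodes = pending.get(blockId);
              if (nodes == null) {
                  nodes = new HashSet<String>();
                  pending.put(blockId, nodes);
              }
              nodes.add(datanode);
          }

          synchronized void confirmed(long blockId, String datanode) {
              Set<String> nodes = pending.get(blockId);
              if (nodes != null) {
                  nodes.remove(datanode);
                  if (nodes.isEmpty()) {
                      pending.remove(blockId);
                  }
              }
          }

          // Datanodes the placement algorithm should skip for this block:
          // confirmed replicas plus the to-be-confirmed, in-transit ones.
          synchronized Set<String> excludedNodes(long blockId, Set<String> confirmedReplicas) {
              Set<String> excluded = new HashSet<String>(confirmedReplicas);
              Set<String> inTransit = pending.get(blockId);
              if (inTransit != null) {
                  excluded.addAll(inTransit);
              }
              return excluded;
          }

          public static void main(String[] args) {
              PendingPlacementTracker tracker = new PendingPlacementTracker();
              long blk = -5969983648201186681L;      // block id from the log above
              tracker.scheduled(blk, "datanode-3");   // replication scheduled, not yet confirmed
              Set<String> confirmed = new HashSet<String>();
              confirmed.add("datanode-1");
              // datanode-3 is excluded even before it confirms the replica, so the
              // namenode will not target it again and trigger the error above.
              System.out.println(tracker.excludedNodes(blk, confirmed));
          }
      }

      With such a tracker, target selection would exclude excludedNodes(blockId, confirmedReplicas) rather than only the confirmed replica locations.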

    Attachments

    Issue Links

    Activity

    People

      Assignee: Unassigned
      Reporter: Hairong Kuang (hairong)
      Votes: 2
      Watchers: 3

    Dates

      Created:
      Updated: