Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
Description
>> Copy from dev list:
Our cluster has 4 nodes, and I set the mapred.submit.replication parameter to 2 on all nodes and the master. Everything has been restarted.
Unfortunately, we still get the same exception:
2007-09-05 17:01:59,623 ERROR org.apache.hadoop.dfs.DataNode: DataXceiver: java.io.IOException: Block blk_-5969983648201186681 is valid, and cannot be written to.
    at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:515)
    at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:822)
    at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:727)
    at java.lang.Thread.run(Thread.java:595)
>> end of copy
The message shows that the namenode scheduled a block to be replicated to a datanode that already holds that block. The namenode block placement algorithm makes sure it does not schedule a block to a datanode that is confirmed to hold a replica of the block, but it is not aware of in-transit block placements (i.e., placements that are scheduled but not yet confirmed), so occasionally we may still see "is valid, and cannot be written to" errors.
A fix is to keep track of all in-transit block placements so that the block placement algorithm also treats these to-be-confirmed replicas as existing copies; a sketch of the idea follows below.
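For illustration, here is a minimal sketch of the proposed bookkeeping. The class and method names below are hypothetical, not the actual FSNamesystem code: the namenode would record each scheduled-but-unconfirmed transfer, clear it once the target datanode confirms the block (e.g., via blockReceived), and the placement algorithm would skip a datanode that either holds a confirmed replica or has a transfer in flight.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical helper, not actual Hadoop code: tracks replicas that are
    // scheduled but not yet confirmed, keyed by block id and target datanode.
    class InTransitBlockTracker {
      // blockId -> datanodes that have been asked to receive the block
      private final Map<Long, Set<String>> scheduled =
          new HashMap<Long, Set<String>>();

      // Record that a replica of blockId is in transit to the given datanode.
      synchronized void schedule(long blockId, String datanode) {
        Set<String> targets = scheduled.get(blockId);
        if (targets == null) {
          targets = new HashSet<String>();
          scheduled.put(blockId, targets);
        }
        targets.add(datanode);
      }

      // Clear the pending entry once the datanode confirms the block.
      synchronized void confirm(long blockId, String datanode) {
        Set<String> targets = scheduled.get(blockId);
        if (targets != null) {
          targets.remove(datanode);
          if (targets.isEmpty()) {
            scheduled.remove(blockId);
          }
        }
      }

      // The placement algorithm should treat a datanode as already holding
      // the block if a transfer to it is pending, in addition to checking
      // the set of confirmed replica locations.
      synchronized boolean isScheduledOrConfirmed(long blockId, String datanode,
                                                  Set<String> confirmedReplicas) {
        Set<String> targets = scheduled.get(blockId);
        return confirmedReplicas.contains(datanode)
            || (targets != null && targets.contains(datanode));
      }
    }

With bookkeeping like this, the target chooser would consult isScheduledOrConfirmed() for each candidate datanode before assigning it as a replication target, which would avoid the "is valid, and cannot be written to" race described above.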
Attachments
Issue Links
- depends upon HADOOP-1946: du should not be called on every heartbeat (Closed)