Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 0.12.3
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
The patch submitted to HADOOP-893 (by me) seems to have a bug in how it deals with the deadNodes set. After the patch, seekToNewSource() looks like this:
    public synchronized boolean seekToNewSource(long targetPos) throws IOException {
      boolean markedDead = deadNodes.contains(currentNode);
      deadNodes.add(currentNode);
      DatanodeInfo oldNode = currentNode;
      DatanodeInfo newNode = blockSeekTo(targetPos);
      if (!markedDead) {
        /* remove it from deadNodes. blockSeekTo could have cleared
         * deadNodes and added currentNode again. Thats ok. */
        deadNodes.remove(oldNode);
      }
      // ...
I guess the expectation was that the caller of this function decides, before the call, whether to put the node in deadNodes. I am not sure whether this was a bug then, but it certainly seems to be a bug now: when there is a checksum error with replica1, we try replica2, and if there is a checksum error again, we try replica1 again!
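To make the retry loop concrete, here is a minimal, self-contained sketch of the intended behavior: a node that fails stays in deadNodes, so a later failure on another replica cannot route the read back to it. The ReplicaSelector and DeadNodesDemo classes and all names here are hypothetical stand-ins for the DFS client's bookkeeping, not the actual patch code.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Hypothetical stand-in for the DFS client's replica bookkeeping.
    class ReplicaSelector {
      private final List<String> replicas;           // datanodes holding the block
      private final Set<String> deadNodes = new HashSet<String>();
      private String currentNode;

      ReplicaSelector(List<String> replicas) {
        this.replicas = replicas;
        this.currentNode = replicas.isEmpty() ? null : replicas.get(0);
      }

      // Abandon currentNode and move to a replica we have not failed on yet.
      // The failed node stays in deadNodes, so a later checksum error on
      // another replica cannot send the read back to it.
      boolean seekToNewSource() {
        if (currentNode != null) {
          deadNodes.add(currentNode);
        }
        for (String node : replicas) {
          if (!deadNodes.contains(node)) {
            currentNode = node;
            return true;                             // fresh replica found
          }
        }
        currentNode = null;
        return false;                                // every replica has failed
      }

      String currentNode() { return currentNode; }
    }

    public class DeadNodesDemo {
      public static void main(String[] args) {
        ReplicaSelector sel =
            new ReplicaSelector(java.util.Arrays.asList("replica1", "replica2"));
        // checksum error on replica1: move to replica2
        System.out.println(sel.seekToNewSource() + ", now at " + sel.currentNode());
        // checksum error on replica2: all replicas exhausted; replica1 is
        // NOT retried, unlike the behavior described above
        System.out.println(sel.seekToNewSource() + ", now at " + sel.currentNode());
      }
    }

By contrast, in the patched code above the deadNodes.remove(oldNode) call undoes the marking whenever the caller had not already marked the node dead, which is what lets the read bounce back to replica1.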
Note that ChecksumFileSystem.java was created after HADOOP-893 was resolved.
Issue Links
- is part of HADOOP-1134: Block level CRCs in HDFS (Closed)