Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Duplicate
- Affects Version/s: 0.20.1
- Fix Version/s: None
Description
The HDFS write pipeline does not select the correct datanode in some error cases. One example: say DN2 is the second datanode in the pipeline and a write to it times out because it is in a bad state; pipeline recovery actually removes the first datanode instead. If such a datanode happens to be the last one in the pipeline, the write is aborted completely with a hard error.
Essentially, the error occurs when writing to a downstream datanode fails, rather than when reading from it fails. This bug was actually fixed in 0.18 (HADOOP-3339), but HADOOP-1700 essentially reverted that fix; I am not sure why.
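To make the failure mode concrete, here is a minimal, self-contained sketch of the recovery decision involved. This is not the actual DFSClient/DataStreamer code; the class and method names (PipelineSketch, onMirrorWriteTimeout*, errorIndex) are hypothetical stand-ins used only for illustration.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration of the pipeline-recovery decision; not HDFS code.
public class PipelineSketch {
  private final List<String> pipeline;
  private int errorIndex = -1; // index of the datanode to drop on recovery

  PipelineSketch(String... datanodes) {
    pipeline = new ArrayList<String>(Arrays.asList(datanodes));
  }

  // Buggy decision described in this issue: when the write to a downstream
  // mirror (e.g. DN2) times out, the blame is pinned on the first datanode
  // instead of the one the write actually failed against.
  void onMirrorWriteTimeoutBuggy(int failedMirror) {
    errorIndex = 0; // wrong: DN1 will be dropped
  }

  // Behavior the HADOOP-3339 fix had: blame the datanode that failed.
  void onMirrorWriteTimeoutFixed(int failedMirror) {
    errorIndex = failedMirror; // correct: the failed mirror will be dropped
  }

  // Recovery removes pipeline[errorIndex] and continues with the rest.
  void recover() {
    pipeline.remove(errorIndex);
    errorIndex = -1;
  }

  List<String> nodes() { return pipeline; }

  public static void main(String[] args) {
    PipelineSketch p = new PipelineSketch("DN1", "DN2", "DN3");
    p.onMirrorWriteTimeoutBuggy(1); // write to DN2 times out
    p.recover();
    // The bad DN2 is still in the pipeline; healthy DN1 was removed.
    System.out.println(p.nodes()); // prints [DN2, DN3]
  }
}
{code}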
It is absolutely essential for HDFS to handle failures on a subset of the datanodes in a pipeline. At the very least, we should not have known bugs that lead to hard failures.
I will attach a patch with a hack that illustrates this problem. I am still thinking about what an automated test for this would look like.
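One option for a first automated check would be a unit-level test of just this decision, exercising the recovery logic directly rather than a full cluster; the sketch below runs against the hypothetical PipelineSketch above. A real HDFS test would instead need to inject a write timeout into the second datanode of a running pipeline.

{code:java}
import java.util.Arrays;
import org.junit.Assert;
import org.junit.Test;

// Hypothetical unit test against the PipelineSketch above, not an HDFS test.
public class TestPipelineRecoverySketch {
  @Test
  public void timedOutMirrorIsTheNodeRemoved() {
    PipelineSketch p = new PipelineSketch("DN1", "DN2", "DN3");
    p.onMirrorWriteTimeoutFixed(1); // write to the second datanode times out
    p.recover();
    // The bad datanode must be gone and the healthy ones kept.
    Assert.assertEquals(Arrays.asList("DN1", "DN3"), p.nodes());
  }
}
{code}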
My preferred target for this fix is 0.20.1.
Attachments
Issue Links
- duplicates HDFS-101: DFS write pipeline : DFSClient sometimes does not detect second datanode failure (Closed)