Details
- Type: New Feature
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 0.23.0
- Component/s: None
- Hadoop Flags: Incompatible change, Reviewed
Description
In the current design, if there is a datanode or network failure in the write pipeline, DFSClient tries to remove the failed datanode from the pipeline and then continues writing with the remaining datanodes. As a result, the number of datanodes in the pipeline decreases. Unfortunately, since failure detection may be inaccurate under erroneous conditions, DFSClient may incorrectly remove a healthy datanode while leaving the failed datanode in the pipeline.
We propose a new mechanism that adds new datanodes to the pipeline in order to provide a stronger data guarantee.
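To make the proposal concrete, below is a minimal, self-contained Java sketch of the pipeline-rebuilding step only: instead of merely dropping the suspected datanode, the client also adds a replacement so the pipeline keeps its original width. The DatanodeInfo class, the recoverPipeline method, and the dn1..dn4 names are hypothetical illustrations, not the actual DFSClient code.

import java.util.ArrayList;
import java.util.List;

public class PipelineRecoverySketch {

  /** Hypothetical stand-in for a datanode descriptor. */
  static class DatanodeInfo {
    final String name;
    DatanodeInfo(String name) { this.name = name; }
    @Override public String toString() { return name; }
  }

  /**
   * Rebuilds the pipeline after a suspected failure at failedIndex.
   * With the proposed mechanism, a replacement datanode is added so
   * that the pipeline width (and thus the data guarantee) is preserved.
   */
  static List<DatanodeInfo> recoverPipeline(List<DatanodeInfo> pipeline,
                                            int failedIndex,
                                            DatanodeInfo replacement) {
    List<DatanodeInfo> rebuilt = new ArrayList<>(pipeline);
    rebuilt.remove(failedIndex);      // drop the suspected datanode
    if (replacement != null) {
      rebuilt.add(replacement);       // add a new datanode: width preserved
    }
    return rebuilt;
  }

  public static void main(String[] args) {
    List<DatanodeInfo> pipeline = new ArrayList<>();
    pipeline.add(new DatanodeInfo("dn1"));
    pipeline.add(new DatanodeInfo("dn2"));
    pipeline.add(new DatanodeInfo("dn3"));
    // Suppose dn2 is suspected to have failed and the namenode supplies dn4.
    System.out.println(recoverPipeline(pipeline, 1, new DatanodeInfo("dn4")));
    // -> [dn1, dn3, dn4]
  }
}

In the real client, the replacement node would presumably be allocated by the namenode, and the partially written block would have to be transferred to the new datanode before writing resumes; the sketch shows only the membership change.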
Attachments
Issue Links
- is related to:
  - HDFS-1595 DFSClient may incorrectly detect datanode failure (Resolved)
  - HDFS-1599 Umbrella Jira for Improving HBASE support in HDFS (Open)
  - HDFS-265 Revisit append (Closed)
  - HDFS-1785 Cleanup BlockReceiver and DataXceiver (Closed)
  - HDFS-1789 Refactor frequently used codes from DFSOutputStream, BlockReceiver and DataXceiver (Closed)
  - HDFS-1817 Split TestFiDataTransferProtocol.java into two files (Closed)