Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
While writing a block to data nodes, if the DFS client detects a bad data node in the write pipeline, it reconstructs a new pipeline
that excludes the bad node. As a result, when the client finishes writing the block, the number of replicas for that block
may be lower than the intended replication factor. If the ratio of the actual number of replicas to the intended replication factor falls below a
certain threshold (say 0.68), the client should send a request to the name node to replicate that block immediately.
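The proposed client-side check could be sketched as below. This is an illustrative sketch only, not the actual DFSClient code: the class and method names are hypothetical, and the 0.68 threshold is the example value from the description.

```java
// Hypothetical sketch of the proposed check; names are illustrative,
// not the real DFSClient API.
public class ReplicationCheck {

    // Example threshold from the proposal: below this ratio the client
    // should ask the name node to re-replicate the block immediately.
    static final double MIN_REPLICATION_RATIO = 0.68;

    /**
     * Returns true when the number of replicas that survived the write
     * pipeline is too low relative to the intended replication factor.
     */
    static boolean needsImmediateReplication(int liveReplicas, int replicationFactor) {
        return (double) liveReplicas / replicationFactor < MIN_REPLICATION_RATIO;
    }

    public static void main(String[] args) {
        // Pipeline started with 3 data nodes; one was dropped as bad.
        // 2/3 is about 0.67, which is below 0.68, so the client would
        // request immediate replication from the name node.
        System.out.println(needsImmediateReplication(2, 3));

        // All 3 replicas written: ratio 1.0, no extra request needed.
        System.out.println(needsImmediateReplication(3, 3));
    }
}
```

With this check, the extra replication request is only sent when the pipeline lost enough nodes to matter; a single failure out of a large replication factor would still be handled by the name node's normal background re-replication.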