Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Affects Version/s: 2.7.3, 3.0.0-alpha2
Fix Version/s: 2.9.0, 3.0.0-alpha4
Component/s: datanode
Labels: None
Target Version/s:
The test fails with the following error:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:57377,DS-b4ec61fc-657c-4e2a-9dc3-8d93b7769a2b,DISK], DatanodeInfoWithStorage[127.0.0.1:47448,DS-18bca8d7-048d-4d7f-9594-d2df16096a3d,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:57377,DS-b4ec61fc-657c-4e2a-9dc3-8d93b7769a2b,DISK], DatanodeInfoWithStorage[127.0.0.1:47448,DS-18bca8d7-048d-4d7f-9594-d2df16096a3d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1280)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1354)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1512)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1236)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:721)
In such a case, the DataNode that has been removed cannot be used in the pipeline recovery.
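
For reference, the exception itself points at the client-side replacement policy. A minimal sketch of how a client or test might relax that policy to avoid aborting the write when no replacement datanode is available (a workaround under the standard HDFS client keys, not the fix committed for this issue; the class name and target path are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RelaxReplacementPolicy {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Keep the replace-datanode-on-failure feature enabled, but tell the
        // client never to look for a replacement node during pipeline recovery.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        // Writes and appends through this FileSystem use the relaxed policy.
        try (FileSystem fs = FileSystem.get(conf)) {
          fs.create(new Path("/tmp/example")).close();
        }
      }
    }

With the DEFAULT policy, a pipeline of two or more replicas that loses a node tries to add a new datanode; on a small (for example, three-node) test cluster where the failed node is also the only spare, that lookup fails and produces the IOException above.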