Details
- Type: Bug
- Status: Resolved
- Priority: Blocker
- Resolution: Duplicate
- Affects Version/s: 3.0.0-beta1
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
Fails consistently in trunk with the following exception:

Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 71.317 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
testZeroByteBlockRecovery(org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery)  Time elapsed: 11.422 sec  <<< ERROR!
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:63722,DS-9befc828-8ff7-4284-8fba-a6c55627ab3d,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:63722,DS-9befc828-8ff7-4284-8fba-a6c55627ab3d,DISK]]). The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1321)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1387)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1586)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1487)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1469)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1273)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684)
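The exception message names the client-side knob involved: the failed-datanode replacement policy is ALWAYS, so when the pipeline's only datanode fails there is no replacement available and the write errors out. As a hedged sketch (not a proposed fix for this test), a client running against a small cluster could relax the policy in its hdfs-site.xml via the keys referenced in the message; the NEVER value and the companion enable key are standard Hadoop client settings:

```xml
<!-- Sketch: client-side overrides in hdfs-site.xml. On clusters with too few
     datanodes to supply a replacement, relaxing the policy avoids the
     "Failed to replace a bad datanode" IOException seen above. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- Valid values include DEFAULT, ALWAYS (as in this failure), and NEVER. -->
  <value>NEVER</value>
</property>
<property>
  <!-- Alternatively, disable datanode replacement on failure entirely. -->
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>false</value>
</property>
```

Note this only suppresses the replacement attempt on the client; it does not address why the block recovery in the test leaves no good datanodes to try.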
Attachments
Issue Links
- duplicates
  - HDFS-12378 TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails on trunk (Resolved)