Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Affects Version/s: 2.0.0-alpha
Hadoop Flags: Reviewed
Description
After adding the timeout to TestUnderReplicatedBlocks in HDFS-4061, we can see that the root cause of the failure is a ReplicaAlreadyExistsException:
org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block BP-1541130889-172.29.121.238-1350435573411:blk_-3437032108997618258_1002 already exists in state FINALIZED and thus cannot be created.
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:799)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:90)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:155)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:393)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
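
For reference, the timeout mentioned above is the standard JUnit 4 per-test timeout; without it the test hangs and the exception above never surfaces in the test report. A minimal sketch of that mechanism follows; the method name, timeout value, and test body are illustrative, not the actual HDFS-4061 patch.

import org.junit.Test;

public class TestUnderReplicatedBlocksSketch {

  // With a timeout, JUnit fails the test instead of letting it block forever,
  // so the underlying datanode exception shows up in the logs of the failed run.
  @Test(timeout = 60000)  // illustrative value, in milliseconds
  public void testReplicationOfUnderReplicatedBlock() throws Exception {
    // ... start a MiniDFSCluster, remove a replica, raise the replication
    // factor, and wait for the block to be re-replicated ...
  }
}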