Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 0.21.0
- Fix Version/s: None
- Component/s: None
- Labels: None
- Environment: dfs.support.append=true
Current branch-0.21 of hdfs, mapreduce, and common. Here is svn info:
URL: https://svn.apache.org/repos/asf/hadoop/hdfs/branches/branch-0.21
Repository Root: https://svn.apache.org/repos/asf
Repository UUID: 13f79535-47bb-0310-9956-ffa450edef68
Revision: 827883
Node Kind: directory
Schedule: normal
Last Changed Author: szetszwo
Last Changed Rev: 826906
Last Changed Date: 2009-10-20 00:16:25 +0000 (Tue, 20 Oct 2009)
Description
Running load tests against HDFS branch-0.21, I got the following on the receiving datanode:
2009-10-21 04:57:10,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_6345892463926159834_1030 src: /XX.XX.XX.141:53112 dest: /XX.XX.XX.140:51010
2009-10-21 04:57:10,771 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_6345892463926159834_1030 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block blk_6345892463926159834_1030 already exists in state RBW and thus cannot be created.
2009-10-21 04:57:10,771 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(XX.XX.XX.140:51010, storageID=DS-1292310101-XX.XX.XX.140-51010-1256100924816, infoPort=51075, ipcPort=51020):DataXceiver
org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block blk_6345892463926159834_1030 already exists in state RBW and thus cannot be created.
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.createTemporary(FSDataset.java:1324)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:98)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:258)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:382)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:323)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
        at java.lang.Thread.run(Thread.java:619)
On the sender side:
2009-10-21 04:57:10,740 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(XX.XX.XX.141:51010, storageID=DS-1870884070-XX.XX.XX.141-51010-1256100925196, infoPort=51075, ipcPort=51020) Starting thread to transfer block blk_6345892463926159834_1030 to XX.XX.XX.140:51010
2009-10-21 04:57:10,770 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(XX.XX.XX.141:51010, storageID=DS-1870884070-XX.XX.XX.141-51010-1256100925196, infoPort=51075, ipcPort=51020):Failed to transfer blk_6345892463926159834_1030 to XX.XX.XX.140:51010 got
java.net.SocketException: Original Exception : java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
        at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:415)
        at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:516)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:199)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:346)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:434)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1262)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.IOException: Connection reset by peer
        ... 8 more
The block generation stamp, 1030, is one more than the one in HDFS-720 (same test run, with about 8 seconds between the errors).
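For context, the receiver-side failure is a replica-state check: the target datanode already holds a replica of the block in RBW ("replica being written") state, so a second attempt to create a temporary replica for the same block is rejected. A minimal sketch of that kind of check, with illustrative names (ReplicaMapSketch is hypothetical, not the actual FSDataset code):

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a datanode-like replica map keyed by block ID.
// Creating a temporary replica for a block that already has one in RBW
// state fails, which mirrors the ReplicaAlreadyExistsException above.
public class ReplicaMapSketch {
    enum ReplicaState { RBW, FINALIZED }

    private final ConcurrentHashMap<Long, ReplicaState> replicas =
        new ConcurrentHashMap<>();

    /** Registers a new replica in RBW state; fails if one already exists. */
    public void createTemporary(long blockId) {
        // putIfAbsent returns the previous value, or null if none existed.
        ReplicaState prev = replicas.putIfAbsent(blockId, ReplicaState.RBW);
        if (prev != null) {
            throw new IllegalStateException("Block blk_" + blockId
                + " already exists in state " + prev
                + " and thus cannot be created.");
        }
    }

    /** Marks a completed replica as FINALIZED, if it exists. */
    public void finalizeReplica(long blockId) {
        replicas.replace(blockId, ReplicaState.FINALIZED);
    }
}
```

In the scenario above, the namenode asked a second datanode to re-transfer a block whose write was still in flight on the target, so the target's equivalent of createTemporary rejected the duplicate.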
Attachments
Issue Links
- causes: HDFS-16064 Determine when to invalidate corrupt replicas based on number of usable replicas (Resolved)
- is duplicated by: HDFS-6123 Improve datanode error messages (Closed)