Description
yeshavora found two cases where the DataNode log contains an unnecessary exception stack trace:
- SocketTimeoutException
2014-03-07 03:30:44,567 INFO datanode.DataNode (BlockSender.java:sendPacket(563)) - exception: java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/xx.xx.xx.xx:1019 remote=/xx.xx.xx.xx:37997]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) ...
- ReplicaAlreadyExistsException
2014-03-07 03:02:39,334 ERROR datanode.DataNode (DataXceiver.java:run(234)) - xx.xx.xx.xx:1019:DataXceiver error processing WRITE_BLOCK operation src: /xx.xx.xx.xx:32959 dest: /xx.xx.xx.xx:1019 org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block BP-1409640778-xx.xx.xx.xx-1394150965191:blk_1073742158_1334 already exists in state TEMPORARY and thus cannot be created.
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:874) ...
Both cases occur during normal operation and are not bugs, so logging the full stack trace only adds noise; a one-line message would suffice.
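One way to reduce this noise is to log only the exception class and message for expected conditions, instead of the full stack trace. The sketch below is illustrative only (the `describe` helper is hypothetical, not Hadoop code), showing the general pattern:

```java
import java.net.SocketTimeoutException;

public class QuietLogging {
    // Hypothetical helper: render an expected exception as a single
    // line (class name plus message) instead of a full stack trace.
    static String describe(Throwable t) {
        return t.getClass().getSimpleName() + ": " + t.getMessage();
    }

    public static void main(String[] args) {
        try {
            // Simulate the expected condition from the log above.
            throw new SocketTimeoutException(
                "480000 millis timeout while waiting for channel to be ready for write");
        } catch (SocketTimeoutException e) {
            // A client read timeout is normal for a DataNode, so a
            // one-line entry is enough; no stack trace is printed.
            System.out.println("exception: " + describe(e));
        }
    }
}
```

The same pattern would apply to ReplicaAlreadyExistsException: catch the specific expected type, log a single line, and reserve stack traces for genuinely unexpected errors.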
Attachments
Issue Links
- duplicates: HDFS-721 "ERROR Block blk_XXX_1030 already exists in state RBW and thus cannot be created" (Resolved)