Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Affects Version/s: 0.15.0
- Fix Version/s: None
- Component/s: None
Description
I have 2 data-nodes, one of which is trying to replicate blocks to the other.
The second data-node throws the following exception for every replicated block.
07/10/09 20:36:39 INFO dfs.DataNode: Received block blk_-8942388986043611634 from /a.d.d.r:43159
07/10/09 20:36:39 WARN dfs.DataNode: Error writing reply back to /a.d.d.r:43159for writing block blk_-8942388986043611634
07/10/09 20:36:39 WARN dfs.DataNode: java.net.SocketException: Broken pipe
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:115)
    at java.io.DataOutputStream.writeShort(DataOutputStream.java:151)
    at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:939)
    at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:763)
    at java.lang.Thread.run(Thread.java:619)
- It looks like the first data-node does not expect to receive anything back from the second one and closes the connection (a rough sketch of this reply path follows this list).
- There should be a space at the start of the string literal in
  + "for writing block " + block );
- The port number in these messages is misleading. Each DataXceiver connection uses a different, ephemeral port, which is not the data-node's main port. So we should instead print the main port here, in order to be able to recognize which data-node the block was sent from (see the second sketch after this list).
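For context, here is a rough sketch of the reply path implied by the stack trace above, with the missing space added to the log message. The class, method, ack opcode, and logger names are illustrative guesses, not the actual 0.15.0 DataNode source:

    import java.io.BufferedOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.net.Socket;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    class ReplyPathSketch {
      private static final Log LOG = LogFactory.getLog(ReplyPathSketch.class);

      // Called on the receiving data-node after a replicated block has arrived.
      static void ackBlock(Socket s, String blockName) {
        try {
          DataOutputStream reply =
              new DataOutputStream(new BufferedOutputStream(s.getOutputStream()));
          reply.writeShort(0);   // ack opcode; the real constant is not shown here
          reply.flush();
        } catch (IOException e) {
          // The sending data-node never reads this ack and has already closed
          // its socket, so the write above fails with
          // java.net.SocketException: Broken pipe and we end up here.
          LOG.warn("Error writing reply back to " + s.getRemoteSocketAddress()
              + " for writing block " + blockName);   // note the added space
        }
      }
    }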
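And one possible shape of the port change, assuming the receiving side can somehow be told the sender's main data-node port; the senderMainPort parameter below is hypothetical, since the current code only sees the ephemeral DataXceiver port via getRemoteSocketAddress():

    import java.net.InetSocketAddress;
    import java.net.Socket;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    class PortLoggingSketch {
      private static final Log LOG = LogFactory.getLog(PortLoggingSketch.class);

      // Logs the sender's host together with its main data-node port instead of
      // the ephemeral port of the per-transfer DataXceiver connection.
      static void logReplyError(Socket s, int senderMainPort, String blockName) {
        InetSocketAddress remote = (InetSocketAddress) s.getRemoteSocketAddress();
        LOG.warn("Error writing reply back to "
            + remote.getAddress().getHostAddress() + ":" + senderMainPort
            + " for writing block " + blockName);
      }
    }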
Is this related to HADOOP-1908?