Description
When a non-leaf datanode in a pipeline is slow or stuck on disk I/O, the downstream node can time out reading packets, since even the heartbeat packets will not be relayed down.
The packet read timeout is set in DataXceiver#run():
peer.setReadTimeout(dnConf.socketTimeout);
When the downstream node times out and closes the connection to the upstream, the upstream node's PacketResponder gets an EOFException and sends an ack upstream with the downstream node's status set to ERROR. This causes the client to exclude the downstream node, even though the upstream node was the one that got stuck.
The upstream node's connection to its downstream has a longer timeout, so the downstream node will always time out first. The downstream timeout is set in writeBlock():
int timeoutValue = dnConf.socketTimeout
    + (HdfsConstants.READ_TIMEOUT_EXTENSION * targets.length);
int writeTimeout = dnConf.socketWriteTimeout
    + (HdfsConstants.WRITE_TIMEOUT_EXTENSION * targets.length);
NetUtils.connect(mirrorSock, mirrorTarget, timeoutValue);
OutputStream unbufMirrorOut = NetUtils.getOutputStream(mirrorSock, writeTimeout);
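To see why the downstream node always times out first, compare the two read timeouts. The sketch below is not the actual Hadoop code; it assumes the typical defaults of a 60 s socket timeout and a 5 s READ_TIMEOUT_EXTENSION per remaining pipeline target (the real values come from the cluster configuration and HdfsConstants):

```java
public class PipelineTimeouts {
    // Assumed illustrative defaults, not read from any config.
    static final int SOCKET_TIMEOUT = 60_000;        // ~dfs.client.socket-timeout
    static final int READ_TIMEOUT_EXTENSION = 5_000; // per remaining target

    // Timeout a node applies when reading acks from its downstream mirror,
    // mirroring the writeBlock() arithmetic: extended per remaining target.
    static int upstreamToDownstreamTimeout(int remainingTargets) {
        return SOCKET_TIMEOUT + READ_TIMEOUT_EXTENSION * remainingTargets;
    }

    // Timeout a node applies when reading packets from its upstream,
    // as set in DataXceiver#run(): the plain socket timeout, no extension.
    static int downstreamReadTimeout() {
        return SOCKET_TIMEOUT;
    }

    public static void main(String[] args) {
        // In a three-node pipeline, the middle node has one remaining target.
        int up = upstreamToDownstreamTimeout(1);
        int down = downstreamReadTimeout();
        // The downstream read timeout is strictly shorter, so when the
        // middle node stalls, its downstream gives up first.
        System.out.println(down < up);
    }
}
```

Because the extension grows with the number of remaining targets, the gap widens for nodes earlier in the pipeline, so the downstream side of any stalled link is always the first to disconnect.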
Attachments
Issue Links
- duplicates HDFS-5032: Write pipeline failures caused by slow or busy disk may not be handled properly. (Resolved)