Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 2.8.0
- Component/s: None
- Hadoop Flags: Reviewed
Description
We have seen HDFS writes crash when a very large block size is used. For example, when writing a 3 GB file with a block size greater than 2 GB (e.g., 3 GB), the HDFS client throws an out-of-memory exception and the DataNode reports an IOException. After raising the heap size limit, a DFSOutputStream ResponseProcessor exception is seen instead, followed by a broken pipe and pipeline recovery.
Given below is the DataNode exception:
2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010
java.io.IOException: Incorrect value for packet payload size: 2147483128
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)
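For context, a minimal reproduction sketch using the public FileSystem API: it writes roughly 3 GB of data into a file created with a 3 GB block size, which is the scenario described above. The path, replication factor, and buffer sizes here are illustrative assumptions, not values taken from the original report.

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BigBlockWriteRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    long blockSize = 3L * 1024 * 1024 * 1024;   // block size > 2 GB (3 GB)
    long fileSize  = 3L * 1024 * 1024 * 1024;   // write ~3 GB of data
    Path path = new Path("/tmp/bigblock-test"); // illustrative path

    byte[] buffer = new byte[8 * 1024 * 1024];  // 8 MB client-side write buffer
    Arrays.fill(buffer, (byte) 'x');

    // Creating the file with a block size greater than 2 GB is what leads to the
    // client-side OOM / "Incorrect value for packet payload size" on the DataNode.
    try (FSDataOutputStream out =
             fs.create(path, true, conf.getInt("io.file.buffer.size", 4096),
                       (short) 3, blockSize)) {
      long written = 0;
      while (written < fileSize) {
        out.write(buffer);
        written += buffer.length;
      }
    }
  }
}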
Attachments
Issue Links
- breaks: HDFS-11766 Fix findbugs warning in branch-2.7 (Resolved)