Description
On a 32-bit JVM, SocketOutputStream.transferToFully() fails if the block size is >= 2 GB. We should fall back to a normal transfer in this case.
2010-12-02 19:04:23,490 ERROR datanode.DataNode (BlockSender.java:sendChunks(399)) - BlockSender.sendChunks() exception: java.io.IOException: Value too large for defined data type
	at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
	at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:418)
	at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:519)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:204)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:386)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:475)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opReadBlock(DataXceiver.java:196)
	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opReadBlock(DataTransferProtocol.java:356)
	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:328)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
	at java.lang.Thread.run(Thread.java:619)
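A minimal sketch of the proposed fallback check. This is not the actual patch: the class name TransferModeChooser, the method useNormalTransfer(), and the use of the sun.arch.data.model system property to detect a 32-bit JVM are all assumptions for illustration; the real fix would live in BlockSender/SocketOutputStream.

```java
public class TransferModeChooser {
    // 2 GB boundary at which transferTo() fails on a 32-bit JVM
    // (assumption: the native sendfile path overflows a 32-bit offset,
    // producing EOVERFLOW, "Value too large for defined data type").
    static final long LIMIT_32BIT = 0x80000000L; // 2 GB

    /**
     * Hypothetical helper: returns true when we should skip
     * FileChannel.transferTo() and use a normal buffered transfer instead.
     *
     * @param dataModel value of the "sun.arch.data.model" property ("32"/"64")
     * @param transferSize number of bytes the caller intends to transfer
     */
    static boolean useNormalTransfer(String dataModel, long transferSize) {
        return "32".equals(dataModel) && transferSize >= LIMIT_32BIT;
    }

    public static void main(String[] args) {
        String model = System.getProperty("sun.arch.data.model", "64");
        long threeGiB = 3L << 30;
        System.out.println(useNormalTransfer(model, threeGiB));
    }
}
```

On a 64-bit JVM this chooser keeps the zero-copy transferTo() path for all sizes; only 32-bit JVMs with a transfer of 2 GB or more take the buffered fallback.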