Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 3.4.0
- Hadoop Flags: Reviewed
Description
In our production environment, if a Hadoop process is started under a non-English locale, a large number of unexpected error logs are printed. The following is the error output from a DataNode:
```
2023-05-01 09:10:50,299 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,299 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: 断开的管道
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:242)
at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:260)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:801)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:755)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:580)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:258)
at java.lang.Thread.run(Thread.java:745)
2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,302 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: 断开的管道
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:242)
at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:260)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:801)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:755)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:580)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:258)
at java.lang.Thread.run(Thread.java:745)
2023-05-01 09:10:50,303 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,303 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: 断开的管道
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:242)
at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:260)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:568)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:801)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:755)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:580)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:258)
at java.lang.Thread.run(Thread.java:745)
```
The reason for this is that the code uses the IOException message text to decide whether to print the exception log, but a different locale changes the content of that message: 断开的管道 is the zh_CN localization of "Broken pipe", so the English-only string match no longer recognizes these harmless client disconnects and they get logged at ERROR level.
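For illustration, the locale sensitivity boils down to the following pattern (a simplified sketch of the message-based check in classes such as BlockSender and FileIoProvider, not the exact Hadoop source):
```
import java.io.IOException;

public class LocaleSensitiveCheck {
    // Simplified sketch of the suppression pattern: routine socket disconnects
    // are filtered out by matching the English JVM/native error text.
    static boolean shouldLogAsError(IOException e) {
        String msg = e.getMessage() == null ? "" : e.getMessage();
        // Under zh_CN the JVM localizes "Broken pipe" to "断开的管道",
        // so the match fails and the disconnect is logged as an ERROR.
        return !msg.startsWith("Broken pipe") && !msg.startsWith("Connection reset");
    }

    public static void main(String[] args) {
        System.out.println(shouldLogAsError(new IOException("Broken pipe"))); // false: suppressed
        System.out.println(shouldLogAsError(new IOException("断开的管道")));      // true: logged as ERROR
    }
}
```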
This flood of error logs is very misleading, so this patch sets the LANG environment variable in hadoop-env.sh so that the daemons produce English error messages.
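A minimal sketch of the change in hadoop-env.sh; the concrete locale value below is only an example (any locale whose error messages are in English works), not necessarily the exact value used by the patch:
```
# Pin the daemon locale so that JVM/native error messages stay in English
# and the message-based log suppression keeps working.
# en_US.UTF-8 is an illustrative value.
export LANG=en_US.UTF-8
```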