Description
Problem:
------------
There is excessive error logging when a file is opened by libhdfs (DFSClient/HDFS) in an S3 environment. The issue is caused by byte-buffer reads not being supported by the S3A input stream; see HADOOP-14603 "S3A input stream to support ByteBufferReadable".
The following message is printed repeatedly in the error log / to stderr:
--------------------------------------------------------------------------------------------------
UnsupportedOperationException: Byte-buffer read unsupported by input stream
java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
--------------------------------------------------------------------------------------------------
Root cause:
After investigating the issue, it appears that the above exception is printed because, when a file is opened, hdfsOpenFileImpl() calls readDirect(), which hits this exception.
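For illustration, here is a simplified Java sketch (not Hadoop's actual source) of the dispatch that produces the exception: FSDataInputStream.read(ByteBuffer) can only delegate to a wrapped stream that implements ByteBufferReadable, and the S3A input stream (prior to HADOOP-14603) does not, so the probe made by readDirect() throws.
{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

// Stand-in for org.apache.hadoop.fs.ByteBufferReadable in this sketch.
interface ByteBufferReadableSketch {
  int read(ByteBuffer buf) throws IOException;
}

// Simplified model of FSDataInputStream.read(ByteBuffer): delegate if the
// wrapped stream supports byte-buffer reads, otherwise throw.
class FsDataInputStreamSketch {
  private final InputStream wrapped;

  FsDataInputStreamSketch(InputStream wrapped) {
    this.wrapped = wrapped;
  }

  int read(ByteBuffer buf) throws IOException {
    if (wrapped instanceof ByteBufferReadableSketch) {
      return ((ByteBufferReadableSketch) wrapped).read(buf);
    }
    // This is the message libhdfs ends up printing repeatedly when
    // hdfsOpenFileImpl() -> readDirect() probes the byte-buffer path.
    throw new UnsupportedOperationException(
        "Byte-buffer read unsupported by input stream");
  }
}
{code}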
Fix:
Since the HDFS client does not explicitly initiate the byte-buffer read (it happens implicitly as part of opening the file), we should not generate this error log when a file is opened.
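A minimal sketch of one way to achieve this, assuming the probe is kept but its failure is remembered and downgraded from an error log to a silent fallback. The class and method names here are illustrative, not the actual libhdfs change:
{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

// Hypothetical sketch: probe the byte-buffer read path once, and if the
// underlying stream rejects it, remember that and fall back to plain byte[]
// reads without writing an error-level log on every open.
class QuietDirectReadWrapper {
  // Anything exposing read(ByteBuffer) that may throw
  // UnsupportedOperationException, e.g. FSDataInputStream.
  interface DirectReadable {
    int read(ByteBuffer buf) throws IOException;
  }

  private final DirectReadable direct;
  private final InputStream fallback;
  private boolean directSupported = true;

  QuietDirectReadWrapper(DirectReadable direct, InputStream fallback) {
    this.direct = direct;
    this.fallback = fallback;
  }

  int read(ByteBuffer buf) throws IOException {
    if (directSupported) {
      try {
        return direct.read(buf);
      } catch (UnsupportedOperationException e) {
        // Expected on streams without ByteBufferReadable support (e.g. S3A
        // before HADOOP-14603): remember it quietly instead of logging an
        // error every time a file is opened.
        directSupported = false;
      }
    }
    // Fallback path: copy through an ordinary byte[] read.
    byte[] tmp = new byte[buf.remaining()];
    int n = fallback.read(tmp, 0, tmp.length);
    if (n > 0) {
      buf.put(tmp, 0, n);
    }
    return n;
  }
}
{code}
The real change would live in libhdfs's hdfsOpenFileImpl()/readDirect() path; the sketch only shows the suppress-and-fallback idea.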
Issue Links
- is related to HDFS-14111 "hdfsOpenFile on HDFS causes unnecessary IO from file offset 0" (Resolved)