The error messages are logged by the HDFS client process. They appear to come from hdfsOpenFileImpl(), which is invoked when HDFS checks whether readDirect() is possible for a given file. When the underlying stream does not support byte-buffer reads, readDirect() fails with the exception shown below.
Error message logged:

readDirect: FSDataInputStream#read error:
UnsupportedOperationException: Byte-buffer read unsupported by input stream
java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
Reference code: http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c#L1173
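For illustration, the probe-and-fallback pattern that hdfsOpenFileImpl() performs can be sketched as below. This is a hypothetical stand-in, not the real libhdfs code: the readDirect() and supportsDirectRead() helpers here are invented names, and no Hadoop classes are used; it only mimics how a zero-copy read attempt throws UnsupportedOperationException on a stream without byte-buffer support, and how catching that exception (and logging it each time) produces the noise described above.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.ByteBuffer;

public class DirectReadProbe {
    // Hypothetical stand-in for a direct (byte-buffer) read. Streams that
    // lack byte-buffer support behave like FSDataInputStream wrapping a
    // non-ByteBufferReadable stream: the call throws rather than reading.
    static int readDirect(InputStream in, ByteBuffer buf) {
        throw new UnsupportedOperationException(
            "Byte-buffer read unsupported by input stream");
    }

    // Probe at open time: attempt a zero-length direct read and fall back
    // to regular byte[] reads on failure. Logging the caught exception on
    // every file open is what makes the message so frequent.
    static boolean supportsDirectRead(InputStream in) {
        try {
            readDirect(in, ByteBuffer.allocate(0));
            return true;
        } catch (UnsupportedOperationException e) {
            // This is where libhdfs emits the error message, even though
            // the fallback path works fine.
            return false;
        }
    }

    public static void main(String[] args) {
        InputStream s3Like = new ByteArrayInputStream(new byte[0]);
        System.out.println(supportsDirectRead(s3Like)); // prints false
    }
}
```

The key point of the sketch is that the exception is an expected, recoverable outcome of the capability probe, so logging it at error level on every open is misleading.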
Since this does not happen on every read, a single occurrence should not be expensive. However, with a large number of files the excessive logging itself may cause performance issues, and in most cases the message is a red herring. Hence, the HDFS team should look into reducing this error logging, as it appears to be quite frequent in an S3 setup when Impala is used.