Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Version: 2.0.0-beta-2
- Hadoop Flags: Reviewed
Description
The log from a long-running test contains the following stack trace a few times:
2018-04-09 18:33:21,523 WARN org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988, offset=36884200, end=231005989
java.lang.IllegalArgumentException
    at java.nio.Buffer.limit(Buffer.java:275)
    at org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
    at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
    at org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Size-on-disk calculations seem to get messed up when the file is encrypted. Possible fixes:
- check whether the file is encrypted via FileStatus#isEncrypted() and, if so, do not prefetch (a minimal sketch of this option follows the list);
- document that hbase.rs.prefetchblocksonopen cannot be set to true if the file is encrypted.
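A minimal sketch of the first option, assuming a small guard checked before scheduling the prefetch-on-open task. The PrefetchGuard class and shouldPrefetch method are hypothetical names and are not how HFileReaderImpl is actually wired; only FileStatus#isEncrypted(), FileSystem#getFileStatus() and the hbase.rs.prefetchblocksonopen setting mentioned above come from existing APIs and this report:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Sketch of the proposed guard: skip prefetch-on-open when the file lives
 * in an HDFS encryption zone, since prefetching such files trips the
 * IllegalArgumentException seen in the log above.
 */
public final class PrefetchGuard {

  private PrefetchGuard() {
  }

  /** Returns true if prefetch should be scheduled for the given file. */
  static boolean shouldPrefetch(Configuration conf, FileSystem fs, Path path) throws IOException {
    // Existing HBase switch mentioned in the description.
    boolean prefetchOnOpen = conf.getBoolean("hbase.rs.prefetchblocksonopen", false);
    if (!prefetchOnOpen) {
      return false;
    }
    // FileStatus#isEncrypted() reports whether the file is in an encryption zone.
    FileStatus status = fs.getFileStatus(path);
    return !status.isEncrypted();
  }
}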
Attachments
Issue Links
- is related to: HADOOP-15557 CryptoInputStream can't handle concurrent access; inconsistent with HDFS (Open)