Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 0.94.11, 0.98.6.1
- Fix Version/s: None
- Component/s: None
- Environment:
I can reproduce it on a simple 2-node cluster, one node running the master and the other running a region server. I was testing on EC2. I used the following configuration for the cluster.
hbase-env:
HBASE_REGIONSERVER_OPTS=-Xmx2G -XX:MaxDirectMemorySize=5G -XX:CMSInitiatingOccupancyFraction=88 -XX:+AggressiveOpts -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/hbase-regionserver-gc.log
hbase-site:
hbase.bucketcache.ioengine=offheap
hbase.bucketcache.size=4196
hbase.rs.cacheblocksonwrite=true
hfile.block.index.cacheonwrite=true
hfile.block.bloom.cacheonwrite=true
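The same settings can also be applied programmatically to bring up a comparable single-JVM setup. The sketch below is hypothetical (the cluster above was configured through hbase-site.xml and hbase-env, not in code) and assumes an HBaseTestingUtility mini cluster started with enough direct memory for the bucket cache.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class OffheapCacheOnWriteRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Same keys/values as the hbase-site.xml entries listed above.
        conf.set("hbase.bucketcache.ioengine", "offheap");
        conf.set("hbase.bucketcache.size", "4196");            // values above 1.0 are interpreted as megabytes in these versions
        conf.setBoolean("hbase.rs.cacheblocksonwrite", true);
        conf.setBoolean("hfile.block.index.cacheonwrite", true);
        conf.setBoolean("hfile.block.bloom.cacheonwrite", true);

        // Needs the JVM started with enough direct memory, e.g. -XX:MaxDirectMemorySize=5G as above.
        HBaseTestingUtility util = new HBaseTestingUtility(conf);
        util.startMiniCluster();
        try {
            // run the streaming-write / scatter-gather-read workload here
        } finally {
            util.shutdownMiniCluster();
        }
    }
}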
- Hadoop Flags: Reviewed
Description
In my experiments, I have writers streaming their output to HBase. The reader powers a web page and does a scatter/gather read, fetching the 1000 most recently written keys and passing them to the front end. With this workload, I get the exception below at the region server. Again, I am using HBase 0.98.6.1. Any help is appreciated.
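To make the read pattern concrete, a minimal sketch of the reader side is shown here, using the 0.98 client API with a hypothetical table name. The description does not say whether the reader issues a reverse scan or a batch of Gets, so the reverse scan below is only one plausible form; the exception itself follows the sketch.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class LatestKeysReader {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HConnection conn = HConnectionManager.createConnection(conf);
        HTableInterface table = conn.getTable("stream_events"); // hypothetical table name
        try {
            // Read roughly the 1000 most recently written rows in one pass; assumes
            // row keys sort so that a reverse scan returns the newest rows first.
            Scan scan = new Scan();
            scan.setReversed(true);  // reverse scans are available in 0.98
            scan.setCaching(1000);
            ResultScanner scanner = table.getScanner(scan);
            try {
                int returned = 0;
                for (Result row : scanner) {
                    // hand the row off to the front end here
                    if (++returned >= 1000) {
                        break;
                    }
                }
            } finally {
                scanner.close();
            }
        } finally {
            table.close();
            conn.close();
        }
    }
}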
2014-10-10 15:06:44,173 ERROR [B.DefaultRpcServer.handler=62,queue=2,port=60020] ipc.RpcServer: Unexpected throwable object
java.lang.IllegalArgumentException
at java.nio.Buffer.position(Buffer.java:236)
at org.apache.hadoop.hbase.util.ByteBufferUtils.skip(ByteBufferUtils.java:434)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:849)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:760)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:248)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:317)
at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:176)
at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1780)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3758)
at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1950)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1936)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1913)
at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3157)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29587)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
at java.lang.Thread.run(Thread.java:744)
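For context on the top two frames: ByteBufferUtils.skip advances the block buffer's position by the key/value lengths it just decoded, and java.nio.Buffer.position(int) throws IllegalArgumentException when the requested position is negative or beyond the buffer's limit, which suggests the lengths read from the (cached) block do not match the block's actual contents. A standalone illustration of the JDK behavior (not HBase code):

import java.nio.ByteBuffer;

public class BufferPositionDemo {
    public static void main(String[] args) {
        ByteBuffer block = ByteBuffer.allocate(16);  // limit is 16
        // Skipping past the limit, as a corrupt length would cause:
        block.position(block.position() + 32);       // throws java.lang.IllegalArgumentException
    }
}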
Attachments
Issue Links
- is related to: HBASE-12531 bug in cachedataonwrite (Closed)