HBASE-13830: HBase REVERSED scan may sometimes throw an exception


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 0.98.1
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

      Run the following scan from the HBase shell command line:

      scan 'analytics_access',{ENDROW=>'9223370603647713262-flume01.hadoop-10.32.117.111-373563509',LIMIT=>10,REVERSED=>true}
      

      It will throw this exception:

      java.io.IOException: java.io.IOException: Could not seekToPreviousRow StoreFileScanner[HFileScanner for reader reader=hdfs://nameservice1/hbase/data/default/analytics_access/a54c47c568c00dd07f9d92cfab1accc7/cf/2e3a107e9fec4930859e992b61fb22f6, compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], firstKey=9223370603542781142-flume01.hadoop-10.32.117.111-378180911/cf:key/1433311994702/Put, lastKey=9223370603715515112-flume01.hadoop-10.32.117.111-370923552/cf:timestamp/1433139261951/Put, avgKeyLen=80, avgValueLen=115, entries=43544340, length=1409247455, cur=9223370603647710245-flume01.hadoop-10.32.117.111-373563545/cf:payload/1433207065597/Put/vlen=644/mvcc=0] to key 9223370603647710245-flume01.hadoop-10.32.117.111-373563545/cf:payload/1433207065597/Put/vlen=644/mvcc=0
      	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:448)
      	at org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.seekToPreviousRow(ReversedKeyValueHeap.java:88)
      	at org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.seekToPreviousRow(ReversedStoreScanner.java:128)
      	at org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.seekToNextRow(ReversedStoreScanner.java:88)
      	at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:503)
      	at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
      	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3866)
      	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3946)
      	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3814)
      	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3805)
      	at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3136)
      	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
      	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
      	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
      	at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
      	at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
      	at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
      	at java.lang.Thread.run(Thread.java:745)
      Caused by: java.io.IOException: On-disk size without header provided is 47701, but block header contains 10134. Block offset: -1, data starts with: DATABLK*\x00\x00'\x96\x00\x01\x00\x04\x00\x00\x00\x005\x96^\xD2\x01\x00\x00@\x00\x00\x00'
      	at org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:451)
      	at org.apache.hadoop.hbase.io.hfile.HFileBlock.access$400(HFileBlock.java:87)
      	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1466)
      	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1314)
      	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:355)
      	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:569)
      	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:413)
      	... 17 more
      
      	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.6.0_65]
      	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) ~[na:1.6.0_65]
      	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) ~[na:1.6.0_65]
      	at java.lang.reflect.Constructor.newInstance(Constructor.java:513) ~[na:1.6.0_65]
      	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) ~[hadoop-common-2.3.0-cdh5.1.0.jar:na]
      	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) ~[hadoop-common-2.3.0-cdh5.1.0.jar:na]
      	at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:284) ~[hbase-client-0.98.1-cdh5.1.0.jar:na]
      	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:198) ~[hbase-client-0.98.1-cdh5.1.0.jar:na]
      	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:57) ~[hbase-client-0.98.1-cdh5.1.0.jar:na]
      	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114) [hbase-client-0.98.1-cdh5.1.0.jar:na]
      	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90) [hbase-client-0.98.1-cdh5.1.0.jar:na]
      	at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:336) [hbase-client-0.98.1-cdh5.1.0.jar:na]
      	at com.saic.bigdata.storm.test.HbaseGet2.main(HbaseGet2.java:29) [test-classes/:na]
      Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: java.io.IOException: Could not seekToPreviousRow StoreFileScanner[HFileScanner for reader reader=hdfs://nameservice1/hbase/data/default/analytics_access/a54c47c568c00dd07f9d92cfab1accc7/cf/2e3a107e9fec4930859e992b61fb22f6, compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], firstKey=9223370603542781142-flume01.hadoop-10.32.117.111-378180911/cf:key/1433311994702/Put, lastKey=9223370603715515112-flume01.hadoop-10.32.117.111-370923552/cf:timestamp/1433139261951/Put, avgKeyLen=80, avgValueLen=115, entries=43544340, length=1409247455, cur=9223370603647710245-flume01.hadoop-10.32.117.111-373563545/cf:payload/1433207065597/Put/vlen=644/mvcc=0] to key 9223370603647710245-flume01.hadoop-10.32.117.111-373563545/cf:payload/1433207065597/Put/vlen=644/mvcc=0
      	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:448)
      	at org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.seekToPreviousRow(ReversedKeyValueHeap.java:88)
      	at org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.seekToPreviousRow(ReversedStoreScanner.java:128)
      	at org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.seekToNextRow(ReversedStoreScanner.java:88)
      	at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:503)
      	at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
      	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3866)
      	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3946)
      	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3814)
      	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3805)
      	at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3136)
      	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
      	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
      	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
      	at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
      	at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
      	at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
      	at java.lang.Thread.run(Thread.java:745)
      Caused by: java.io.IOException: On-disk size without header provided is 47701, but block header contains 10134. Block offset: -1, data starts with: DATABLK*\x00\x00'\x96\x00\x01\x00\x04\x00\x00\x00\x005\x96^\xD2\x01\x00\x00@\x00\x00\x00'
      	at org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:451)
      	at org.apache.hadoop.hbase.io.hfile.HFileBlock.access$400(HFileBlock.java:87)
      	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1466)
      	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1314)
      	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:355)
      	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:569)
      	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:413)
      	... 17 more
      
      	at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453) ~[hbase-client-0.98.1-cdh5.1.0.jar:na]
      	at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1657) ~[hbase-client-0.98.1-cdh5.1.0.jar:na]
      	at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715) ~[hbase-client-0.98.1-cdh5.1.0.jar:na]
      	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:29900) ~[hbase-protocol-0.98.1-cdh5.1.0.jar:na]
      	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:168) ~[hbase-client-0.98.1-cdh5.1.0.jar:na]
      	... 5 common frames omitted
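
      For reference, an equivalent reverse scan issued through the 0.98 Java client API exercises the same seekToPreviousRow path on the region server (the client-side trace above shows a similar call from com.saic.bigdata.storm.test.HbaseGet2). The sketch below is only an illustration: the class name, the mapping of the shell's LIMIT=>10 to a client-side row cap, and the use of ENDROW as the reversed scan's stop row are assumptions, not taken from the original test code.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.client.ResultScanner;
      import org.apache.hadoop.hbase.client.Scan;
      import org.apache.hadoop.hbase.util.Bytes;

      public class ReverseScanRepro {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "analytics_access");    // 0.98-era client API
          try {
            Scan scan = new Scan();
            scan.setReversed(true);                               // REVERSED=>true
            // ENDROW from the shell command; in a reversed scan the stop row is the
            // lexicographically smaller bound the scan walks down towards (assumed mapping).
            scan.setStopRow(Bytes.toBytes(
                "9223370603647713262-flume01.hadoop-10.32.117.111-373563509"));
            ResultScanner scanner = table.getScanner(scan);
            try {
              int rows = 0;
              Result r;
              // ClientScanner.next() is where the reported IOException surfaces on the client.
              while ((r = scanner.next()) != null) {
                System.out.println(r);
                if (++rows >= 10) {                               // LIMIT=>10 (assumed client-side cap)
                  break;
                }
              }
            } finally {
              scanner.close();
            }
          } finally {
            table.close();
          }
        }
      }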
      


          People

            Assignee: Unassigned
            Reporter: sunny.davy ryan.jin
