PHOENIX-3022

Phoenix-qs is raising StaleRegionBoundaryCacheException when querying merging regions


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 4.7.0
    • Fix Version/s: 4.7.0
    • Component/s: None
    • Labels: None

    Description

      Phoenix-qs is raising the following error when querying a table whose regions are splitting and merging:

      2016-06-08 09:52:46,623|beaver.machine|INFO|9389|139653045778176|MainThread|2/2          SELECT COUNT(unsig_long_id) AS Result FROM SECONDARY_LARGE_TABLE AS S INNER JOIN GIGANTIC_TABLE AS L ON S.sec_id=L.id GROUP BY unsig_long_id ORDER BY unsig_long_id DESC;
      2016-06-08 09:52:50,096|beaver.machine|INFO|9389|139653045778176|MainThread|Error: Error -1 (00000) : Error while executing SQL "SELECT COUNT(unsig_long_id) AS Result FROM SECONDARY_LARGE_TABLE AS S INNER JOIN GIGANTIC_TABLE AS L ON S.sec_id=L.id GROUP BY unsig_long_id ORDER BY unsig_long_id DESC": Remote driver error: RuntimeException: java.sql.SQLException: Encountered exception in sub plan [0] execution. -> SQLException: Encountered exception in sub plan [0] execution. -> PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: GIGANTIC_TABLE,\x80\x01\xE8\x87,1465379557445.b8d953a7975b27ed2dd55936bea92d7d.: null
      2016-06-08 09:52:50,096|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
      2016-06-08 09:52:50,097|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:53)
      2016-06-08 09:52:50,097|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:444)
      2016-06-08 09:52:50,097|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
      2016-06-08 09:52:50,097|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2402)
      2016-06-08 09:52:50,097|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
      2016-06-08 09:52:50,097|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
      2016-06-08 09:52:50,098|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
      2016-06-08 09:52:50,101|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
      2016-06-08 09:52:50,102|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
      2016-06-08 09:52:50,102|beaver.machine|INFO|9389|139653045778176|MainThread|at java.lang.Thread.run(Thread.java:745)
      2016-06-08 09:52:50,103|beaver.machine|INFO|9389|139653045778176|MainThread|Caused by: java.lang.NullPointerException
      2016-06-08 09:52:50,103|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1397)
      2016-06-08 09:52:50,103|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
      2016-06-08 09:52:50,103|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
      2016-06-08 09:52:50,103|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:439)
      2016-06-08 09:52:50,103|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:713)
      2016-06-08 09:52:50,103|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:1256)
      2016-06-08 09:52:50,104|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:296)
      2016-06-08 09:52:50,104|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:194)
      2016-06-08 09:52:50,104|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.seekOrReseekToProperKey(LocalIndexStoreFileScanner.java:235)
      2016-06-08 09:52:50,112|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.seekOrReseek(LocalIndexStoreFileScanner.java:220)
      2016-06-08 09:52:50,113|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.reseek(LocalIndexStoreFileScanner.java:94)
      2016-06-08 09:52:50,113|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
      2016-06-08 09:52:50,113|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
      2016-06-08 09:52:50,113|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:268)
      2016-06-08 09:52:50,113|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:815)
      2016-06-08 09:52:50,114|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:792)
      2016-06-08 09:52:50,114|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:592)
      2016-06-08 09:52:50,114|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
      2016-06-08 09:52:50,114|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5699)
      2016-06-08 09:52:50,114|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5850)
      2016-06-08 09:52:50,115|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5637)
      2016-06-08 09:52:50,115|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:414)
      2016-06-08 09:52:50,122|beaver.machine|INFO|9389|139653045778176|MainThread|... 8 more
      2016-06-08 09:52:50,122|beaver.machine|INFO|9389|139653045778176|MainThread|-> ExecutionException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: GIGANTIC_TABLE,\x80\x01\xE8\x87,1465379557445.b8d953a7975b27ed2dd55936bea92d7d.: null
      2016-06-08 09:52:50,122|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
      2016-06-08 09:52:50,122|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:53)
      2016-06-08 09:52:50,124|beaver.machine|INFO|9389|139653045778176|MainThread|at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:444)
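
      One way to exercise the same scenario is to kick off a region split of the joined table and immediately run the join through the Query Server, so that the scan overlaps the boundary change. Below is a minimal repro sketch, not a test from this report: the ZooKeeper quorum, the PQS URL, and the class name are placeholder assumptions, while the SQL and table names are taken from the log above.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.TableName;
      import org.apache.hadoop.hbase.client.Admin;
      import org.apache.hadoop.hbase.client.ConnectionFactory;

      public class SplitWhileJoining {
          public static void main(String[] args) throws Exception {
              // Admin#split is asynchronous, so the query below runs while
              // GIGANTIC_TABLE's regions are still moving.
              Configuration conf = HBaseConfiguration.create();
              try (org.apache.hadoop.hbase.client.Connection hbase =
                       ConnectionFactory.createConnection(conf);
                   Admin admin = hbase.getAdmin()) {
                  admin.split(TableName.valueOf("GIGANTIC_TABLE"));
              }

              // Thin-client URL for the Phoenix Query Server; host and port
              // are placeholders for the cluster under test.
              String url = "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF";
              try (Connection conn = DriverManager.getConnection(url);
                   Statement stmt = conn.createStatement();
                   ResultSet rs = stmt.executeQuery(
                       "SELECT COUNT(unsig_long_id) AS Result"
                       + " FROM SECONDARY_LARGE_TABLE AS S"
                       + " INNER JOIN GIGANTIC_TABLE AS L ON S.sec_id = L.id"
                       + " GROUP BY unsig_long_id ORDER BY unsig_long_id DESC")) {
                  while (rs.next()) {
                      System.out.println(rs.getLong(1));
                  }
              }
          }
      }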
      

      If we look into the phoenix-qs logs, we can find the following message:

      2016-06-08 09:52:50,021 INFO org.apache.phoenix.iterate.BaseResultIterators: Failed to execute task during cancel
      java.util.concurrent.ExecutionException: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 (XCL08): Cache of region boundaries are out of date.
      	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
      	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
      	at org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:863)
      	at org.apache.phoenix.iterate.RoundRobinResultIterator.close(RoundRobinResultIterator.java:125)
      	at org.apache.phoenix.iterate.RoundRobinResultIterator.fetchNextBatch(RoundRobinResultIterator.java:260)
      	at org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:174)
      	at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
      	at org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:107)
      	at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
      	at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:385)
      	at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:167)
      	at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:163)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      	at java.lang.Thread.run(Thread.java:745)
      Caused by: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 (XCL08): Cache of region boundaries are out of date.
      	at org.apache.phoenix.exception.SQLExceptionCode$13.newException(SQLExceptionCode.java:340)
      	at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
      	at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129)
      	at org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
      	at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107)
      	at org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:190)
      	at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
      	at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
      	... 5 more
      2016-06-08 09:53:00,827 INFO org.apache.hadoop.hbase.client.HBaseAdmin: Started disable of GIGANTIC_TABLE_INDEX
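
      The stack above shows the stale-boundary exception surfacing while BaseResultIterators cancels the hash-join sub-plan's remaining parallel scans, i.e. the region merge invalidated the cached boundaries of GIGANTIC_TABLE mid-query. Until this is handled server-side, a client can retry on this code. A hedged workaround sketch follows; the retry count and sleep are arbitrary illustration values, and since the first log shows the thin driver flattening the code to "Error -1 (00000)", the message text is checked as a fallback.

      import java.sql.Connection;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.sql.Statement;

      public final class StaleBoundaryRetry {

          // Matches ERROR 1108 (XCL08) from the log above. Over the thin
          // driver the numeric code may be lost, hence the message check.
          static boolean isStaleBoundary(SQLException e) {
              return e.getErrorCode() == 1108
                  || "XCL08".equals(e.getSQLState())
                  || String.valueOf(e.getMessage()).contains("XCL08");
          }

          // Re-runs the statement from scratch when the boundary cache is
          // stale; a failed Phoenix scan cannot be resumed mid-stream.
          static long runWithRetry(Connection conn, String sql)
                  throws SQLException, InterruptedException {
              SQLException last = null;
              for (int attempt = 0; attempt < 3; attempt++) {
                  try (Statement stmt = conn.createStatement();
                       ResultSet rs = stmt.executeQuery(sql)) {
                      long rows = 0;
                      while (rs.next()) {
                          rows++;
                      }
                      return rows;
                  } catch (SQLException e) {
                      if (!isStaleBoundary(e)) {
                          throw e; // unrelated failure; do not retry
                      }
                      last = e;
                      Thread.sleep(1000L); // let splits/merges settle
                  }
              }
              throw last;
          }
      }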
      


          People

            Assignee: Unassigned
            Reporter: Sergio Peleato (speleato)