Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
Executing a query containing a != clause can cause a RegionDestroyedException like this:
Exception in thread "main" org.apache.geode.cache.client.ServerOperationException: remote server on 10.166.145.16(client:27461:loner):58776:dfd3ba27:client: While performing a remote query
    at org.apache.geode.cache.client.internal.AbstractOp.processChunkedResponse(AbstractOp.java:342)
    at org.apache.geode.cache.client.internal.QueryOp$QueryOpImpl.processResponse(QueryOp.java:168)
    at org.apache.geode.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:224)
    at org.apache.geode.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:197)
    at org.apache.geode.cache.client.internal.AbstractOp.attempt(AbstractOp.java:384)
    at org.apache.geode.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:284)
    at org.apache.geode.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:355)
    at org.apache.geode.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:756)
    at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:142)
    at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:112)
    at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:797)
    at org.apache.geode.cache.client.internal.QueryOp.execute(QueryOp.java:59)
    at org.apache.geode.cache.client.internal.ServerProxy.query(ServerProxy.java:59)
    at org.apache.geode.cache.query.internal.DefaultQuery.executeOnServer(DefaultQuery.java:327)
    at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:215)
    at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:197)
Caused by: org.apache.geode.cache.query.QueryInvocationTargetException: The Region on which query is executed may have been destroyed.BucketRegion[path='/__PR/_B__trade_0;serial=12;primary=false]
    at org.apache.geode.internal.cache.PRQueryProcessor.executeQueryOnBuckets(PRQueryProcessor.java:264)
    at org.apache.geode.internal.cache.PRQueryProcessor.executeSequentially(PRQueryProcessor.java:214)
    at org.apache.geode.internal.cache.PRQueryProcessor.executeQuery(PRQueryProcessor.java:124)
    at org.apache.geode.internal.cache.partitioned.QueryMessage.operateOnPartitionedRegion(QueryMessage.java:210)
Caused by: org.apache.geode.cache.RegionDestroyedException: BucketRegion[path='/__PR/_B__trade_0;serial=12;primary=false]
    at org.apache.geode.internal.cache.LocalRegion.checkRegionDestroyed(LocalRegion.java:7352)
    at org.apache.geode.internal.cache.LocalRegion.checkReadiness(LocalRegion.java:2757)
    at org.apache.geode.internal.cache.BucketRegion.checkReadiness(BucketRegion.java:1437)
    at org.apache.geode.internal.cache.LocalRegion.size(LocalRegion.java:8313)
    at org.apache.geode.cache.query.internal.index.CompactRangeIndex.getSizeEstimate(CompactRangeIndex.java:331)
    at org.apache.geode.cache.query.internal.CompiledComparison.getSizeEstimate(CompiledComparison.java:337)
    at org.apache.geode.cache.query.internal.GroupJunction.organizeOperands(GroupJunction.java:146)
    at org.apache.geode.cache.query.internal.AbstractGroupOrRangeJunction.filterEvaluate(AbstractGroupOrRangeJunction.java:148)
    at org.apache.geode.cache.query.internal.CompiledJunction.filterEvaluate(CompiledJunction.java:190)
    at org.apache.geode.cache.query.internal.CompiledSelect.evaluate(CompiledSelect.java:538)
    at org.apache.geode.cache.query.internal.CompiledSelect.evaluate(CompiledSelect.java:53)
    at org.apache.geode.cache.query.internal.DefaultQuery.executeUsingContext(DefaultQuery.java:357)
    at org.apache.geode.internal.cache.PRQueryProcessor.executeQueryOnBuckets(PRQueryProcessor.java:248)
Here is an example query that fails:
SELECT * FROM /trade WHERE arrangementId = 'aId_1' AND tradeStatus.toString() != 'CLOSED'
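For reference, a minimal client-side sketch that issues this query; the class name, locator host, and port are illustrative assumptions, not from this ticket:

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.Query;
import org.apache.geode.cache.query.QueryService;
import org.apache.geode.cache.query.SelectResults;

public class NotEqualsQueryClient {
  public static void main(String[] args) throws Exception {
    // Connect through the default pool; the query is shipped to a server and
    // executed there against the partitioned /trade region.
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334) // assumed locator host/port
        .create();
    QueryService queryService = cache.getQueryService();
    Query query = queryService.newQuery(
        "SELECT * FROM /trade WHERE arrangementId = 'aId_1' AND tradeStatus.toString() != 'CLOSED'");
    // With the bug present, this call fails with the ServerOperationException shown above.
    SelectResults<?> results = (SelectResults<?>) query.execute();
    System.out.println("result size: " + results.size());
    cache.close();
  }
}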
Here is a test that reproduces it:
- start one server with the region configured as PARTITION with:
  - 2 buckets
  - a PartitionResolver that puts the first entry in bucket 0 and every other entry in bucket 1 (see the sketch after this list)
- load N entries
  - the index in bucket 0 becomes the arbitraryBucketIndex
- start a second server
- rebalance
  - bucket 0 moves from the first server to the second server
- run the query
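A minimal sketch of the first server's setup described above, under assumptions the ticket does not spell out: the names TwoBucketResolver, Trade, key_0, arrangementIdIndex, and the port are illustrative, and an index on arrangementId is assumed so that the size-estimation path in the stack trace is exercised.

import java.io.Serializable;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.EntryOperation;
import org.apache.geode.cache.PartitionAttributesFactory;
import org.apache.geode.cache.PartitionResolver;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.server.CacheServer;

public class TradeServer {

  // Hypothetical domain object carrying the fields referenced by the query.
  public static class Trade implements Serializable {
    public final String arrangementId;
    public final String tradeStatus;

    public Trade(String arrangementId, String tradeStatus) {
      this.arrangementId = arrangementId;
      this.tradeStatus = tradeStatus;
    }
  }

  // Routes the first entry (key "key_0") to bucket 0 and every other entry to bucket 1.
  public static class TwoBucketResolver implements PartitionResolver<String, Trade>, Serializable {
    @Override
    public Object getRoutingObject(EntryOperation<String, Trade> opDetails) {
      // With totalNumBuckets = 2, routing object 0 hashes to bucket 0 and 1 to bucket 1.
      return "key_0".equals(opDetails.getKey()) ? 0 : 1;
    }

    @Override
    public String getName() {
      return "TwoBucketResolver";
    }

    @Override
    public void close() {}
  }

  public static void main(String[] args) throws Exception {
    Cache cache = new CacheFactory().create();
    CacheServer server = cache.addCacheServer();
    server.setPort(40404); // assumed port
    server.start();

    PartitionAttributesFactory<String, Trade> paf = new PartitionAttributesFactory<>();
    paf.setTotalNumBuckets(2);
    paf.setPartitionResolver(new TwoBucketResolver());

    Region<String, Trade> trade = cache.<String, Trade>createRegionFactory(RegionShortcut.PARTITION)
        .setPartitionAttributes(paf.create())
        .create("trade");

    // The bucket-0 copy of this index becomes the arbitraryBucketIndex mentioned in the steps above.
    cache.getQueryService().createIndex("arrangementIdIndex", "arrangementId", "/trade");

    // Load N entries: key_0 lands in bucket 0, everything else in bucket 1.
    int n = 10;
    for (int i = 0; i < n; i++) {
      trade.put("key_" + i, new Trade("aId_" + i, i % 2 == 0 ? "OPEN" : "CLOSED"));
    }
  }
}

Starting a second server with the same region and index definitions and then rebalancing (for example via cache.getResourceManager().createRebalanceFactory().start().getResults()) moves bucket 0 to the new member; running the query above afterwards fails with the stack trace shown in the description.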