Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Description
If region locations change and our HBase meta cache is not updated, we might not send the hash join cache to all region servers hosting the table's regions.
Region lookup in ConnectionQueryServicesImpl#getAllTableRegions
boolean reload = false;
while (true) {
    try {
        // We could surface the package protected HConnectionImplementation.getNumberOfCachedRegionLocations
        // to get the sizing info we need, but this would require a new class in the same package and a cast
        // to this implementation class, so it's probably not worth it.
        List<HRegionLocation> locations = Lists.newArrayList();
        byte[] currentKey = HConstants.EMPTY_START_ROW;
        do {
            HRegionLocation regionLocation = connection.getRegionLocation(
                    TableName.valueOf(tableName), currentKey, reload);
            locations.add(regionLocation);
            currentKey = regionLocation.getRegionInfo().getEndKey();
        } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
        return locations;
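Because reload is false on the first pass, this loop walks whatever the client-side meta cache already holds, stale or not. For illustration only (this is not the actual fix for this issue), a caller could force fresh meta lookups via HBase's RegionLocator by passing reload = true; the class and method names below are hypothetical:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public final class FreshRegionLookup {
    // Walks the table's regions, forcing a meta lookup (reload = true)
    // instead of trusting the possibly stale client-side cache.
    public static List<HRegionLocation> getAllTableRegionsFresh(
            Connection conn, byte[] tableName) throws IOException {
        List<HRegionLocation> locations = new ArrayList<>();
        try (RegionLocator locator =
                conn.getRegionLocator(TableName.valueOf(tableName))) {
            byte[] currentKey = HConstants.EMPTY_START_ROW;
            do {
                // reload = true bypasses the cached location at the cost
                // of an extra meta RPC per region.
                HRegionLocation loc = locator.getRegionLocation(currentKey, true);
                locations.add(loc);
                currentKey = loc.getRegionInfo().getEndKey();
            } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
        }
        return locations;
    }
}

Reloading every location unconditionally is too expensive for the common case; judging from the linked follow-up issue's title, the actual fix instead resends the cache when staleness is detected.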
Skipping duplicate servers in ServerCacheClient#addServerCache
List<HRegionLocation> locations = services.getAllTableRegions(
        cacheUsingTable.getPhysicalName().getBytes());
int nRegions = locations.size();
.....
if ( ! servers.contains(entry) &&
        keyRanges.intersectRegion(regionStartKey, regionEndKey,
                cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
    // Call RPC once per server
    servers.add(entry);
For example: table 'T' has two regions, R1 and R2, originally both hosted on region server RS1.
While the Phoenix/HBase connection is still active, R2 is moved to RS2, but the stale meta cache still reports the old locations, i.e. both R1 and R2 on RS1. When we start copying the hash table, we copy it for R1 and skip R2, because the two regions appear to be hosted on the same region server. The query on the table then fails because it cannot find the hash table cache on RS2 while processing region R2.
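Below is a minimal, self-contained Java simulation of this scenario; all class and variable names (StaleCacheDemo, RS1, and so on) are illustrative stand-ins, not Phoenix's actual code:

import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class StaleCacheDemo {
    public static void main(String[] args) {
        // Stale client-side view: both regions still appear to be on RS1.
        Map<String, String> cachedLocations = new LinkedHashMap<>();
        cachedLocations.put("R1", "RS1");
        cachedLocations.put("R2", "RS1"); // stale: R2 has actually moved to RS2

        // Actual assignment after the region move.
        Map<String, String> actualLocations = Map.of("R1", "RS1", "R2", "RS2");

        // Mirrors the "one RPC per server" dedup in ServerCacheClient#addServerCache.
        Set<String> serversSentTo = new HashSet<>();
        for (Map.Entry<String, String> e : cachedLocations.entrySet()) {
            if (serversSentTo.add(e.getValue())) {
                System.out.println("Sending hash cache to " + e.getValue()
                        + " (for region " + e.getKey() + ")");
            } else {
                System.out.println("Skipping region " + e.getKey()
                        + ": server " + e.getValue() + " already covered");
            }
        }

        // The scan of R2 runs on RS2, which never received the cache.
        for (Map.Entry<String, String> e : actualLocations.entrySet()) {
            if (!serversSentTo.contains(e.getValue())) {
                System.out.println("Query over region " + e.getKey()
                        + " fails: no hash cache on " + e.getValue());
            }
        }
    }
}

Running it shows the cache RPC being skipped for R2 and the subsequent lookup failing on RS2, mirroring the query failure described above.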
Issue Links
- causes: PHOENIX-4662 NullPointerException in TableResultIterator.java on cache resend (Resolved)
- links to