Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Release Notes Required
Description
We have noticed that in some cases, when handling a demand message in GridDhtPartitionSupplier.java, the call
iter = grp.offheap().rebalanceIterator(demandMsg.partitions(), demandMsg.topologyVersion());
may throw an exception. In that case, rebalancing should switch to full rebalance, but the code has a bug: remainingParts is filled only after the rebalance iterator has been created:
for (int i = 0; i < histMap.size(); i++) {
    int p = histMap.partitionAt(i);

    remainingParts.add(p);
}
As a result, when the iterator creation fails, the partitions that were meant to be rebalanced by historical rebalance are lost.
The solution is to fill remainingParts before creating the iterator.
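A minimal sketch of the proposed ordering, reusing the identifiers from the snippets above; the surrounding method context and the IgniteRebalanceIterator type are assumptions for illustration, not the exact GridDhtPartitionSupplier code:

// Fill remainingParts first, so that a failure while creating the
// iterator still knows every partition that must fall back to full
// rebalance.
for (int i = 0; i < histMap.size(); i++) {
    int p = histMap.partitionAt(i);

    remainingParts.add(p);
}

// Create the rebalance iterator only after remainingParts is populated.
// If this call throws, the fallback to full rebalance now sees the
// complete set of partitions and no longer loses the historical ones.
IgniteRebalanceIterator iter = grp.offheap().rebalanceIterator(
    demandMsg.partitions(), demandMsg.topologyVersion());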