  Ignite / IGNITE-15364

The rebalancing can be broken if historical rebalancing is reassigned after the client node joined the cluster.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 2.13
    • Release Note: Fixed rebalance issue when historical rebalancing is reassigned after the client node joined the cluster.
    • Flags: Release Notes Required

    Description

      Looks like the following scenario can break data consistency after rebalancing (a hedged reproduction sketch follows the list):

      • Start and activate a cluster of three server nodes.
      • Create a cache with two backups and fill initial data into it.
      • Stop one server node and upload additional data to the cache in order to trigger historical rebalance after the node returns to the cluster.
      • Restart the node and make sure that historical rebalancing is started from the two other nodes.
      • Before rebalancing is completed, a new client node is started and joins the cluster. This leads to cleaning up the partition update counters on the server nodes, i.e. GridDhtPartitionTopologyImpl#cntrMap. ( * )
      • Historical rebalancing from one node fails.
      • In that case, rebalancing is reassigned and the starting node tries to rebalance the missed partitions from another node.
        Unfortunately, the update counters for historical rebalance cannot be properly calculated due to ( * ).
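
      A minimal reproduction sketch of the steps above (not taken from the ticket; it assumes Ignite native persistence so that the restarted node qualifies for historical/WAL rebalance, and the instance names, cache name and key counts are illustrative):

      import org.apache.ignite.Ignite;
      import org.apache.ignite.IgniteCache;
      import org.apache.ignite.Ignition;
      import org.apache.ignite.cluster.ClusterState;
      import org.apache.ignite.configuration.CacheConfiguration;
      import org.apache.ignite.configuration.DataRegionConfiguration;
      import org.apache.ignite.configuration.DataStorageConfiguration;
      import org.apache.ignite.configuration.IgniteConfiguration;

      public class HistoricalRebalanceRepro {
          /** Node configuration with persistence enabled (required for historical rebalance). */
          private static IgniteConfiguration cfg(String name, boolean client) {
              return new IgniteConfiguration()
                  .setIgniteInstanceName(name)
                  .setClientMode(client)
                  .setDataStorageConfiguration(new DataStorageConfiguration()
                      .setDefaultDataRegionConfiguration(
                          new DataRegionConfiguration().setPersistenceEnabled(true)));
          }

          public static void main(String[] args) {
              // 1. Start and activate a cluster of three server nodes.
              Ignite srv0 = Ignition.start(cfg("srv-0", false));
              Ignition.start(cfg("srv-1", false));
              Ignition.start(cfg("srv-2", false));
              srv0.cluster().state(ClusterState.ACTIVE);

              // 2. Create a cache with two backups and fill initial data into it.
              IgniteCache<Integer, Integer> cache = srv0.getOrCreateCache(
                  new CacheConfiguration<Integer, Integer>("test-cache").setBackups(2));
              for (int i = 0; i < 10_000; i++)
                  cache.put(i, i);

              // 3. Stop one server node and upload additional data so that the stopped node
              //    falls behind and is rebalanced historically when it returns.
              Ignition.stop("srv-2", false);
              for (int i = 10_000; i < 20_000; i++)
                  cache.put(i, i);

              // 4. Restart the node; historical rebalancing starts from the two other nodes.
              Ignition.start(cfg("srv-2", false));

              // 5. Before rebalancing is completed, join a client node. This cleans up the
              //    partition update counters (GridDhtPartitionTopologyImpl#cntrMap). ( * )
              Ignition.start(cfg("client-0", true));

              // 6-7. If historical rebalancing from one supplier now fails and is reassigned,
              //      the update counters needed for historical rebalance cannot be properly
              //      calculated due to ( * ), and data consistency may be broken.
          }
      }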

      An additional issue was found while debugging: RebalanceReassignExchangeTask is skipped under some circumstances:

      GridCachePartitionExchangeManager.ExchangeWorker#body0
      	else if (lastAffChangedVer.after(exchId.topologyVersion())) {
      		// There is a new exchange which should trigger rebalancing.
      		// This reassignment request can be skipped.
      		if (log.isInfoEnabled()) {
      			log.info("Partitions reassignment request skipped due to affinity was already changed" +
      				" [reassignTopVer=" + exchId.topologyVersion() +
      				", lastAffChangedTopVer=" + lastAffChangedVer + ']');
      		}
      

      There could be cases when the current rebalance is not canceled on a PME that updates only the minor version, and a RebalanceReassignExchangeTask is then triggered due to partitions missed on the supplier. That RebalanceReassignExchangeTask is skipped, because the current minor version is higher than the rebalance topology version, which leads to a situation where the missed partitions on the demander remain in MOVING state until the next PME that triggers another rebalance.
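
      To make the skip condition concrete, here is a small self-contained sketch (a toy model of the (topVer, minorTopVer) pair, not the internal AffinityTopologyVersion class) showing why a reassignment request carrying the original rebalance topology version is dropped once a minor-version-only PME has advanced the last affinity-changed version:

      public class ReassignSkipDemo {
          /** Toy (major, minor) topology version pair used only for illustration. */
          static class TopVer {
              final long topVer;
              final int minorTopVer;

              TopVer(long topVer, int minorTopVer) {
                  this.topVer = topVer;
                  this.minorTopVer = minorTopVer;
              }

              /** Lexicographic comparison: major version first, then minor version. */
              boolean after(TopVer other) {
                  return topVer != other.topVer ? topVer > other.topVer : minorTopVer > other.minorTopVer;
              }

              @Override public String toString() {
                  return "[" + topVer + ", " + minorTopVer + ']';
              }
          }

          public static void main(String[] args) {
              // Rebalancing was started on this exchange; the reassignment task carries its version.
              TopVer reassignTopVer = new TopVer(5, 0);

              // A PME that bumps only the minor version advances the last affinity-changed
              // version but does not cancel the running rebalance.
              TopVer lastAffChangedVer = new TopVer(5, 1);

              // Mirrors the check in ExchangeWorker#body0: the reassignment request is skipped,
              // so the partitions missed on the demander remain in MOVING state.
              if (lastAffChangedVer.after(reassignTopVer))
                  System.out.println("Partitions reassignment request skipped [reassignTopVer=" +
                      reassignTopVer + ", lastAffChangedTopVer=" + lastAffChangedVer + ']');
          }
      }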

            People

              Assignee: Vyacheslav Koptilin (slava.koptilin)
              Reporter: Vyacheslav Koptilin (slava.koptilin)
              Votes: 0
              Watchers: 4


              Time Tracking

                Original Estimate: Not Specified
                Remaining Estimate: 0h
                Time Spent: 1h 40m