Currently there are two ways to map cache atomic updates:
1) Map to the nodes reporting OWNING status for the partition the update is sent to.
2) Map to affinity nodes only, assuming rebalance is finished.
With the second way an update request may be routed only to the affinity nodes, while there is still another node that owns the partition and can serve read requests.
This can lead to reading null values for a key even though an update for that key succeeded a moment ago.
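The stale-read risk above can be sketched as follows. This is an illustrative model, not Ignite internals: the node and state names are hypothetical, and partition state is reduced to a plain map. It only shows that an affinity-only mapping can omit a node that still reports OWNING and still serves reads.

```java
import java.util.*;

// Hypothetical sketch: contrast routing an atomic update by affinity-only
// mapping vs. by OWNING state. "B_old" is the previous owner that has not
// yet evicted its copy after rebalance moved the partition to "B_new".
public class MappingSketch {
    enum State { OWNING, MOVING, RENTING }

    // Partition state per node (illustrative): all three still report OWNING.
    static Map<String, State> partitionState = Map.of(
        "P", State.OWNING,
        "B_new", State.OWNING,   // new affinity owner
        "B_old", State.OWNING    // old owner, not yet evicted, still serves reads
    );

    // Way 2: affinity mapping only (primary + affinity backups).
    static Set<String> affinityMapping() {
        return Set.of("P", "B_new");
    }

    // Way 1: every node reporting OWNING for the partition.
    static Set<String> owningMapping() {
        Set<String> res = new TreeSet<>();
        for (Map.Entry<String, State> e : partitionState.entrySet())
            if (e.getValue() == State.OWNING)
                res.add(e.getKey());
        return res;
    }

    public static void main(String[] args) {
        Set<String> missed = new TreeSet<>(owningMapping());
        missed.removeAll(affinityMapping());
        // "B_old" never receives the update, yet can still answer reads:
        System.out.println("nodes that can serve stale reads: " + missed);
    }
}
```

A read routed to `B_old` after the update completes returns the old (possibly null) value, which is the symptom described above.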
Problem with using topology mapping:
1) We send an update request with key K to near node N.
2) N maps K to nodes P, B1, B2, B3 (primary and backups) and starts waiting for successful update responses from all of these nodes.
3) N sends the update request to P. Meanwhile, B3 changes its status to RENTING (eviction).
4) P performs its own mapping of K and maps it only to backup nodes B1 and B2.
5) All updates succeed, but N is still waiting for a response from B3. The update request never completes and hangs.
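The steps above can be sketched as a mismatch between the ack set the near node awaits and the ack set that can ever be produced. Again this is a hypothetical model, not Ignite code: node names and the two sets are illustrative.

```java
import java.util.*;

// Hypothetical sketch of the hang: near node N waits for acks from its
// full topology mapping {P, B1, B2, B3}, but after B3 turned RENTING the
// primary remaps the key to {B1, B2} only, so B3's ack never arrives.
public class HangSketch {
    public static void main(String[] args) {
        // Nodes N expects responses from, computed before B3 went RENTING.
        Set<String> awaitedByNear = new TreeSet<>(List.of("P", "B1", "B2", "B3"));

        // Acks actually produced: the primary plus the backups it mapped to.
        Set<String> acksReceived = new TreeSet<>(List.of("P", "B1", "B2"));

        Set<String> pending = new TreeSet<>(awaitedByNear);
        pending.removeAll(acksReceived);

        // The update future completes only when 'pending' drains to empty;
        // with B3 gone from the primary's mapping, it never does.
        System.out.println("update future still waits for: " + pending);
    }
}
```

The core of the bug is that N and P compute their mappings from different views of partition state, and N's wait condition is never reconciled with P's shrunken mapping.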
- relates to
IGNITE-6467 Ignite cache 6: new tests CacheExchangeMergeTest.testConcurrentStartServersAndClients() and testDelayExchangeMessages() have flaky junit assertion after cache get