Description
There is a scheduled task that periodically performs a 'partition SafeTime sync' for each primary replica living on the node. For each such replica, we do the following:
- Take the current time from the node clock ('now')
- Wait till the Metastorage SafeTime reaches 'now'
- Make sure the replica is still primary
- Execute the partition SafeTime sync logic
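The steps above can be sketched with a minimal model. The tracker class and method names below are hypothetical simplifications, not Ignite's actual API: the tracker hands out a future per requested threshold and completes it once the tracked value (Metastorage SafeTime) catches up. Note that this is exactly the shape that produces the garbage described next: every call with an unreached threshold installs a new future.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical, simplified stand-in for a PendingComparableValuesTracker:
// waitFor(threshold) returns a future that completes once the tracked
// value reaches the threshold. No race handling; illustration only.
class SafeTimeTracker {
    private final ConcurrentSkipListMap<Long, CompletableFuture<Void>> waiters =
        new ConcurrentSkipListMap<>();
    private volatile long current = 0;

    CompletableFuture<Void> waitFor(long threshold) {
        if (threshold <= current) {
            return CompletableFuture.completedFuture(null);
        }
        // Steps 2-4 hang off this future; one is installed per call.
        return waiters.computeIfAbsent(threshold, k -> new CompletableFuture<>());
    }

    void update(long value) {
        current = value;
        // Complete and drop every waiter whose threshold has been reached.
        waiters.headMap(value, true).values().forEach(f -> f.complete(null));
        waiters.headMap(value, true).clear();
    }
}
```

One sync round for a replica would then chain "wait for SafeTime, re-check primacy, run the sync logic" onto the returned future.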
Step 2 is implemented by installing a future into a PendingComparableValuesTracker that represents the Metastorage SafeTime. If, for some reason, the Metastorage SafeTime lags behind the node clock, a few (or many) futures might be installed at the same time for the same partition. When there are many partitions, this leads to a huge number of futures, most of which are useless (only the most recent one matters for each partition). This increases the amount of garbage: if the node is already struggling to chew through the load, this will finish it off by drastically increasing GC pressure. The node will choke itself into an OutOfMemory situation.
It is suggested to only execute steps 1-4 if the previous future has already finished. We might lose an occasional partition SafeTime update, but in a situation where the node is already struggling (since the Metastorage SafeTime lags), this will probably go unnoticed.
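The suggested guard could look like the following sketch. The class and method names are hypothetical, not taken from the Ignite codebase; the idea is just to keep at most one in-flight future per partition by skipping a round while the previous one is still waiting.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Hypothetical per-partition guard: a new sync round (steps 1-4) is
// started only if the previous round's future has already completed,
// so at most one waiter per partition is ever installed in the tracker.
class PartitionSyncGuard {
    private final AtomicReference<CompletableFuture<Void>> inFlight =
        new AtomicReference<>(CompletableFuture.completedFuture(null));

    /** Returns true if a new round was started, false if this round was skipped. */
    boolean trySync(Supplier<CompletableFuture<Void>> syncRound) {
        CompletableFuture<Void> prev = inFlight.get();
        if (!prev.isDone()) {
            // Previous round still waiting on Metastorage SafeTime: skip,
            // losing this one update instead of piling up futures.
            return false;
        }
        // The scheduled task runs rounds one at a time, so a plain set
        // (rather than compareAndSet) is assumed to be sufficient here.
        inFlight.set(syncRound.get());
        return true;
    }
}
```

With this guard, a lagging Metastorage SafeTime bounds the garbage to one pending future per partition instead of one per scheduler tick.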
Update: this approach was criticized; another one is being tried: https://issues.apache.org/jira/browse/IGNITE-22759?focusedCommentId=17895965&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17895965
Issue Links
- is related to IGNITE-22331: Storage aimem throws "Failed to commit the transaction." on creation of 1000 tables (Resolved)