Details
- Type: Task
- Status: Resolved
- Priority: Major
- Resolution: Won't Fix
Description
Currently we have several issues:
1) Vacuum has no change set, so it traverses all data to find invisible entries; hence it breaks read statistics and makes the whole data set "hot". Instead, we should traverse data entries, and only those entries which were updated (linked to newer versions). Moreover, vacuum should traverse only those data pages which were updated after the last successful vacuum (i.e. at least one entry on the data page was linked to a newer version). This can easily be done with a special bit on the data page: any update resets this bit to 0, vacuum traverses only data pages whose bit is 0 and sets the bit to 1 after processing.
2) Vacuum traverses partitions instead of data entries, so races like the following are possible: a reader checks an entry; an updater removes this entry from the partition; vacuum does not see the entry and cleans TxLog; the reader then cannot check the entry's state against TxLog and gets an exception. This race prevents an optimization where all entries older than the last successful vacuum version are considered COMMITTED (see the previous suggestion).
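The per-page bit proposed in item 1 could be tracked roughly as below. This is a minimal sketch with hypothetical names (`PageVacuumBit`, `onEntryUpdated`, `needsVacuum`), not Ignite's actual page memory layout; it only illustrates the bit polarity described above (update resets to 0, vacuum sets to 1 after processing):

```java
import java.util.BitSet;

// Sketch of the "updated since last vacuum" flag from item 1.
// Bit semantics follow the description: 0 = page must be vacuumed,
// 1 = page was vacuumed and not updated since.
class PageVacuumBit {
    // One bit per data page; BitSet defaults to 0, so new pages
    // are considered "needs vacuum" until first processed.
    private final BitSet vacuumedBit = new BitSet();

    // Any update to an entry on the page resets the bit to 0.
    void onEntryUpdated(int pageId) {
        vacuumedBit.clear(pageId);
    }

    // Vacuum visits only pages whose bit is 0.
    boolean needsVacuum(int pageId) {
        return !vacuumedBit.get(pageId);
    }

    // After processing a page, vacuum sets the bit to 1.
    void onPageVacuumed(int pageId) {
        vacuumedBit.set(pageId);
    }
}
```

With this flag, a vacuum pass that finds no pages with a zero bit can finish without touching any data, which is the point of the suggestion.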
We need to implement a special structure, like the visibility map in PostgreSQL, to reduce the number of examined pages: iterate over updated data pages only and do not use the cache data tree.
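A PostgreSQL-style visibility map is essentially a bitmap that lets vacuum skip clean pages entirely. A hedged sketch of that idea (class and method names here are invented for illustration, and this says nothing about how Ignite would persist such a map):

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// Sketch of a visibility-map-like structure: one bit per data page,
// set when the page receives an update. Vacuum then iterates only
// over set bits instead of scanning the cache data tree.
class VisibilityMapSketch {
    private final BitSet updatedPages = new BitSet();

    // Mark a page as containing at least one updated entry.
    void markPageUpdated(int pageId) {
        updatedPages.set(pageId);
    }

    // Collect the page ids vacuum must examine, skipping all
    // pages whose bit is not set. After processing a page the
    // caller would clear its bit.
    List<Integer> pagesToVacuum() {
        List<Integer> res = new ArrayList<>();
        for (int p = updatedPages.nextSetBit(0); p >= 0; p = updatedPages.nextSetBit(p + 1))
            res.add(p);
        return res;
    }
}
```

The win is that the cost of a vacuum pass becomes proportional to the number of updated pages rather than to the total data set size.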
Issue Links
- is duplicated by
  - IGNITE-7998 SQL: Improve MVCC vacuum performance by iterating over data pages instead of cache tree. (Resolved)
- is related to
  - IGNITE-9592 MVCC: Use linked lists to store multiple versions. (Closed)