Details
Type: Improvement
Status: Resolved
Priority: Low
Resolution: Fixed
Description
For each compressed file we keep the chunk offsets in memory (as a long[]). The size of this array is proportional to the sstable file size divided by the chunk_length_kb used: for a 64GB sstable with the default 64KB chunks, that is 1M offsets at 8 bytes each, i.e. ~8MB in memory.
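For concreteness, the arithmetic spelled out (a back-of-the-envelope sketch; 64KB is the chunk_length_kb default):

    long sstableSize  = 64L * 1024 * 1024 * 1024;  // 64GB sstable
    long chunkLength  = 64L * 1024;                // chunk_length_kb default: 64KB
    long chunkCount   = sstableSize / chunkLength; // 1,048,576 chunks
    long offsetsBytes = chunkCount * 8;            // one 8-byte long offset per chunk
    // offsetsBytes = 8,388,608 bytes, i.e. ~8MB held as a single contiguous long[]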
Without being absolutely huge, this probably makes the life of the GC harder than necessary, for the same reasons as CASSANDRA-2466, and this ticket proposes the same solution: break those big arrays down into smaller ones to ease fragmentation.
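To illustrate the idea, here is a minimal sketch of such a paged structure; the name SegmentedLongArray and the 4096-entry page size are illustrative assumptions, not the actual patch:

    /**
     * Sketch only: stores offsets in fixed-size pages instead of one huge
     * long[], so no single allocation needs a large contiguous block.
     */
    public class SegmentedLongArray
    {
        private static final int PAGE_SHIFT = 12;             // 4096 longs (32KB) per page, an assumed value
        private static final int PAGE_SIZE = 1 << PAGE_SHIFT;
        private static final int PAGE_MASK = PAGE_SIZE - 1;

        private final long[][] pages;

        public SegmentedLongArray(long length)
        {
            int pageCount = (int) ((length + PAGE_SIZE - 1) / PAGE_SIZE);
            pages = new long[pageCount][];
            for (int i = 0; i < pageCount; i++)
            {
                // the last page may be partial
                int size = (int) Math.min(PAGE_SIZE, length - (long) i * PAGE_SIZE);
                pages[i] = new long[size];
            }
        }

        public long get(long idx)
        {
            return pages[(int) (idx >> PAGE_SHIFT)][(int) (idx & PAGE_MASK)];
        }

        public void set(long idx, long value)
        {
            pages[(int) (idx >> PAGE_SHIFT)][(int) (idx & PAGE_MASK)] = value;
        }
    }

With 32KB pages, each allocation is small enough to place in a fragmented heap, while a lookup stays two array accesses and a shift/mask.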
Note that this is only a concern for size-tiered compaction, since leveled compaction caps the size of individual sstables (and hence of their offset arrays). But until leveled compaction is battle-tested, made the default, and we know nobody uses size-tiered anymore, it's probably worth making the optimization.