CASSANDRA-3432: Avoid large array allocation for compressed chunk offsets


    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Low
    • Resolution: Fixed
    • Fix Version/s: 1.0.2
    • Component/s: None
    • Labels:

      Description

      For each compressed file we keep the chunk offsets in memory (a long[]). The size of this array is proportional to the sstable size divided by the chunk_length_kb used; for a 64GB sstable with the default chunk length, we're talking ~8MB in memory.
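
      A quick back-of-the-envelope check of that figure (assuming the default 64KB chunk length; the numbers are illustrative only):

{code:java}
// Illustrative arithmetic only: one long offset is kept per compressed chunk.
long sstableSize = 64L * 1024 * 1024 * 1024;   // 64GB sstable
long chunkLength = 64L * 1024;                 // assumed default chunk_length_kb = 64
long chunkCount  = sstableSize / chunkLength;  // ~1 million chunks
long offsetBytes = chunkCount * 8;             // 8 bytes per long -> ~8MB in memory
{code}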

      Without being absolutely huge, this probably makes the life of the GC harder than necessary for the same reasons as CASSANDRA-2466, and this ticket proposes the same solution, i.e. to break those big arrays into smaller ones to reduce fragmentation.
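
      A minimal sketch of what "smaller arrays" could look like (the class name and segment size below are hypothetical, not the actual patch):

{code:java}
// Hypothetical sketch: store chunk offsets in fixed-size segments instead of
// one contiguous long[], so no single huge allocation is ever requested.
public class SegmentedLongArray
{
    private static final int SEGMENT_SIZE = 64 * 1024; // longs per segment (assumed)

    private final long[][] segments;

    public SegmentedLongArray(long size)
    {
        int segmentCount = (int) ((size + SEGMENT_SIZE - 1) / SEGMENT_SIZE);
        segments = new long[segmentCount][];
        long remaining = size;
        for (int i = 0; i < segmentCount; i++)
        {
            segments[i] = new long[(int) Math.min(SEGMENT_SIZE, remaining)];
            remaining -= segments[i].length;
        }
    }

    public long get(long idx)
    {
        return segments[(int) (idx / SEGMENT_SIZE)][(int) (idx % SEGMENT_SIZE)];
    }

    public void set(long idx, long value)
    {
        segments[(int) (idx / SEGMENT_SIZE)][(int) (idx % SEGMENT_SIZE)] = value;
    }
}
{code}

      Each segment stays well below the size that tends to go straight to old gen / cause fragmentation, while lookups remain O(1).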

      Note that this is only a concern for size-tiered compaction (leveled compaction keeps individual sstables small). But until leveled compaction is battle tested, becomes the default, and we know nobody uses size-tiered anymore, it's probably worth making the optimization.

            People

            • Assignee: Sylvain Lebresne (slebresne)
            • Reporter: Sylvain Lebresne (slebresne)
            • Authors: Sylvain Lebresne
            • Reviewers: Pavel Yaskevich
