Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 0.8 beta 1
    • Component/s: None
    • Labels: None

      Description

      Compaction is currently relatively bursty: we compact as fast as we can, and then we wait for the next compaction to be possible ("hurry up and wait").

      Instead, to properly amortize compaction, we'd like to compact exactly as fast as needed to keep the sstable count under control.

      For every new level of compaction, the rate at which we compact must increase. A rule of thumb we're testing on our clusters: determine the maximum number of buckets a node can support (e.g., if the 15th bucket holds 750 GB, we're not going to have more than 15 buckets), then multiply the flush throughput by the number of buckets to get the minimum compaction throughput needed to maintain the sstable count.
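      The rule of thumb above is simple arithmetic; a minimal sketch (the function name and the MB/s units are illustrative assumptions, not anything from the ticket):

```python
def min_compaction_throughput(flush_mb_per_s, max_buckets):
    """Rule of thumb: minimum sustained compaction rate needed to keep
    the sstable count stable = flush throughput * number of buckets."""
    return flush_mb_per_s * max_buckets

# Example: a node flushing at 10 MB/s that can support at most 15 buckets
# must sustain at least 10 * 15 = 150 MB/s of compaction throughput.
rate = min_compaction_throughput(10, 15)
```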

      Full explanation: for a min compaction threshold of T, the bucket at level N can contain S_N = T^N 'units' (one unit == a memtable's worth of data on disk). Every time a new unit is added, it has a 1/S_N chance of causing the bucket at level N to fill. When the bucket at level N fills, it causes S_N units to be compacted. So each active level in the system contributes S_N * (1/S_N) = 1 amortized unit of compaction every time a new unit is added.
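      The amortized argument can be checked numerically: if the bucket at level N fills (and compacts its S_N = T^N units) once every S_N flushes, then over F flushes the total work is about one unit per flush per active level. A small sketch under that assumption (function name is hypothetical):

```python
def total_compacted(T, flushes):
    """Total units compacted over `flushes` memtable flushes, assuming the
    bucket at level N (capacity S_N = T**N units) fills every S_N flushes
    and compacts all S_N of its units each time it fills."""
    total, level = 0, 1
    while T ** level <= flushes:
        s_n = T ** level
        total += (flushes // s_n) * s_n  # fills so far * units per fill
        level += 1
    return total

# With T = 4 over 64 flushes, three levels are active (4, 16, 64 units),
# and each contributes exactly 64 units of work: 3 amortized units/flush.
work = total_compacted(4, 64)
```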


            People

            • Assignee: Stu Hood
            • Reporter: Stu Hood
            • Reviewer: Sylvain Lebresne
            • Votes: 1
            • Watchers: 7
