Cassandra / CASSANDRA-8573

Lack of compaction tooling for LeveledCompactionStrategy



    • Type: Bug
    • Status: Resolved
    • Priority: Normal
    • Resolution: Duplicate


      This is a highly frustration-driven ticket; apologies for any roughness in tone.

      Background: I happen to have a partition key with lots of tombstones. Sadly, I also happen to run LeveledCompactionStrategy (LCS). Yes, it was probably my mistake to put them there, but running into tombstone issues seems to be common for Cassandra, so I don't think this ticket can be discarded as simple user error. In fact, I believe this could happen to the best of us. And when it does, there should be a quick way of correcting it.

      Problem: How does one handle this? Well, for DTCS one could issue a compaction using `nodetool compact`, or one could use the forceUserDefinedCompaction MBean. Neither of these works for LCS (shall I also say for DTCS?).
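      For reference, the two paths mentioned above look roughly like this today. The keyspace, table, and sstable names are hypothetical, forceUserDefinedCompaction is driven here through the generic jmxterm tool, and its exact argument form has varied between Cassandra versions, so treat this only as a sketch against a live node:

      ```shell
      # Major compaction via nodetool (effective for STCS; for LCS this is
      # exactly where the tooling gap described in this ticket shows up):
      nodetool -h localhost compact my_keyspace my_table

      # forceUserDefinedCompaction on the CompactionManager MBean, invoked
      # via jmxterm; the sstable filename is a made-up example:
      echo 'run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction my_keyspace-my_table-ka-42-Data.db' \
        | java -jar jmxterm.jar -l localhost:7199 -n
      ```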

      Workaround: The only options, AFAIK, are

      1. to lower `gc_grace_seconds` and "wait it out" until the Cassandra node(s) have garbage collected the sstables. This can take days.
      2. to lower `tombstone_threshold` to something tiny, optionally also lowering `tombstone_compaction_interval` (for recent deletes). This has the implication that nodes might start garbage collecting a ton of unrelated data.
      3. variations of "delete some or all of your sstables" and run a full repair. This takes ages.
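      As a sketch of option 2: the knobs above are table-level settings and can be changed online with CQL. The keyspace/table names and the specific values below are hypothetical, chosen only to illustrate aggressive tombstone compaction, not recommended production values:

      ```shell
      cqlsh localhost -e "
        ALTER TABLE my_keyspace.my_table
        WITH gc_grace_seconds = 3600
        AND compaction = {
          'class': 'LeveledCompactionStrategy',
          'tombstone_threshold': '0.05',
          'tombstone_compaction_interval': '300'
        };"
      ```

      Note the trade-off this ticket complains about: these settings apply to the whole table, so every sstable crossing the threshold gets recompacted, not just the one holding the problematic partition.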

      Proposed solution: Either

      • Make forceUserDefinedCompaction support LCS, or create a new endpoint that does something similar.
      • Make something similar to `nodetool compact` work with LCS.

      Additional comments: I read somewhere that someone proposed making LCS the default compaction strategy. Until this ticket is fixed, I don't see that as an option.

      Let me know what you think (or close if not relevant).


              Assignee: Unassigned
              Reporter: Jens Rantil (ztyx)