Cassandra / CASSANDRA-4153

Optimize truncate when snapshots are disabled or keyspace not durable

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Fix Version/s: 1.1.1
    • Component/s: None
    • Labels:
      None

      Description

      My goal is to make truncate less I/O intensive so that my junit tests run faster (as already explained in CASSANDRA-3710). I think I now have a solution that does not change too much:

      I created a patch that optimizes three things within truncate:

      • Skip the whole Commitlog.forceNewSegment/discardCompletedSegments, if durable_writes are disabled for the keyspace.
      • With CASSANDRA-3710 implemented, truncate does not need to flush memtables to disk when snapshots are disabled.
      • Reduce the sleep interval

      The patch works nicely for me. Applying it and disabling durable_writes/autoSnapshot vastly increased the speed of my test suite. I hope I did not overlook anything.

      Let me know if my patch needs cleanup. I'd be glad to change it, if it means the patch will get accepted.

      1. OptimizeTruncate_v1.diff
        3 kB
        Christian Spriegel


          Activity

          Christian Spriegel added a comment -

          Added patch

          Jonathan Ellis added a comment -

          truncate does not need to flush memtables to disk when snapshots are disabled

          It still needs to clear out the memtables somehow, though, or truncate won't actually discard all the data it's expected to.

          Christian Spriegel added a comment - edited

          Yes, you are right. That is why I call renewMemtable() instead. It drops the old memtable and creates a new one:

                  if (DatabaseDescriptor.isAutoSnapshot())
                  {
                      forceBlockingFlush(); // this was the old flush
                  }
                  else
                  {
                      Table.switchLock.writeLock().lock();
                      try
                      {
                          for (ColumnFamilyStore cfs : concatWithIndexes())
                          {
                              Memtable mt = cfs.getMemtableThreadSafe();
                              if (!mt.isClean() && !mt.isFrozen())
                              {
                                  mt.cfs.data.renewMemtable(); // just drop the memtable
                              }
                          }
                      }
                      finally
                      {
                          Table.switchLock.writeLock().unlock();
                      }
                  }
          

          This code handles only the memtable of the column family being truncated.

          Unfortunately, that is not all. In order to be able to delete the commitlog, truncate also flushes all other memtables (which probably has the worst impact on my test performance). These flushes, however, become obsolete if the CF does not use the commitlog (more precisely, if the keyspace that the CF is in does not):

                  KSMetaData ksm = Schema.instance.getKSMetaData(this.table.name);
                  if(ksm.durableWrites)
                  {
                      CommitLog.instance.forceNewSegment();
                      ReplayPosition position = CommitLog.instance.getContext();
                      // now flush everyone else.  re-flushing ourselves is not necessary, but harmless
                      for (ColumnFamilyStore cfs : ColumnFamilyStore.all())
                          cfs.forceFlush(); // these flushes are obsolete if durableWrites are off
                      waitForActiveFlushes();
                      // if everything was clean, flush won't have called discard
                      CommitLog.instance.discardCompletedSegments(metadata.cfId, position);
                  }
          

          btw: I ran my test suite with the patched Cassandra and it truncated properly. So the very basic stuff should work, but I am not so sure about side effects.

          Whilst we're at it, I have other questions:

          1. Do I need to call Table.switchLock.writeLock().lock() for renewMemtable()?
          2. Are you ok with my sleep change? I think waiting a full 100ms is not necessary; we just want to ensure that currentTimeMillis() advances.
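          The idea behind question 2 can be sketched as a loop that parks only until the millisecond clock actually ticks, instead of sleeping a fixed 100 ms. This is a hedged illustration, not the patch's actual code; `waitForClockTick` is a hypothetical helper name:

```java
// Sketch: wait only until System.currentTimeMillis() advances past its
// current value, which is all a timestamp-uniqueness check actually needs.
public class ClockAdvance
{
    // hypothetical helper, not an actual Cassandra method
    static long waitForClockTick()
    {
        final long start = System.currentTimeMillis();
        long now = start;
        while (now == start)
        {
            try
            {
                Thread.sleep(1); // yield briefly instead of spinning
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();
                break;
            }
            now = System.currentTimeMillis();
        }
        return now;
    }

    public static void main(String[] args)
    {
        long before = System.currentTimeMillis();
        long after = waitForClockTick();
        System.out.println(after > before);
    }
}
```

          On most platforms the clock advances within a millisecond or two, so this waits far less than a fixed 100 ms sleep.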
          Jonathan Ellis added a comment -

          Looks good to me, committed.

          (We do want the lock: we're not concerned about writes-in-progress per se (either keeping them or discarding them is fine), but we definitely want to keep them consistent with their indexes, and taking out the writeLock here is the only way I can see to do that.)
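          The concern here — excluding in-flight writes while swapping state, so that data and its indexes never diverge — follows the standard read/write-lock pattern. A minimal generic sketch using plain `java.util.concurrent` types (not Cassandra's actual `Table.switchLock`; all class and method names below are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Generic sketch: writers take the read lock (many may run concurrently),
// while the swap takes the write lock, so no write can land in the data
// map without also landing in the index map, or vice versa.
public class ConsistentSwap
{
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private Map<String, String> data = new HashMap<>();
    private Map<String, String> index = new HashMap<>();

    public void write(String key, String value)
    {
        lock.readLock().lock();
        try
        {
            data.put(key, value);
            index.put(value, key); // kept consistent with data
        }
        finally
        {
            lock.readLock().unlock();
        }
    }

    // analogous in spirit to renewMemtable(): drop both structures atomically
    public void renew()
    {
        lock.writeLock().lock();
        try
        {
            data = new HashMap<>();
            index = new HashMap<>();
        }
        finally
        {
            lock.writeLock().unlock();
        }
    }

    public int dataSize()  { return data.size(); }
    public int indexSize() { return index.size(); }
}
```

          Because `renew()` holds the write lock, a concurrent `write()` either completes fully before the swap or starts fully after it, which is exactly the consistency property described above.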


            People

            • Assignee:
              Christian Spriegel
              Reporter:
              Christian Spriegel
              Reviewer:
              Jonathan Ellis
            • Votes:
              0
            • Watchers:
              1

              Dates

              • Created:
                Updated:
                Resolved:
