Cassandra / CASSANDRA-3532

Compaction cleanupIfNecessary costly when many files in data dir

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 1.0.6
    • Component/s: Core
    • Labels:
    • Environment: Solaris 10, 1.0.4 release candidate

    Description

      From what I can tell, SSTableWriter.cleanupIfNecessary becomes increasingly costly as the number of files in the data dir increases.
      It calls SSTable.componentsFor(descriptor, Descriptor.TempState.TEMP) which lists all files in the data dir to find matching components.

      Am I roughly correct that (cleanupCost = SSTable count * data dir size)?
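
      For illustration, here's a rough sketch of the pattern implied by the stack trace below -- my own simplification, not the actual Cassandra source. Each cleanup lists the entire data directory and stats every entry, so a single cleanup costs on the order of the directory size, and that cost is paid again for every sstable written:

          // Simplified, illustrative sketch only; dataDir and sstablePrefix are
          // hypothetical stand-ins for the real descriptor fields.
          static File[] candidateComponents(File dataDir, final String sstablePrefix)
          {
              return dataDir.listFiles(new FileFilter()
              {
                  public boolean accept(File file)
                  {
                      // isDirectory() is a stat() per directory entry -- this is the
                      // UnixFileSystem.getBooleanAttributes0 frame dominating the trace below
                      return !file.isDirectory() && file.getName().startsWith(sstablePrefix);
                  }
              });
          }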

      We had been doing write load testing with default compaction throttling (16MB/s) and LeveledCompaction.
      Unfortunately we haven't been keeping tabs on sstable counts, and the count grew out of control.

      On a system with 300,000 sstables, here is an example of our compaction rate. Note that, as you're probably aware, cleanupIfNecessary is included in the timing:

      INFO [CompactionExecutor:48] 2011-11-25 22:25:30,353 CompactionTask.java (line 213) Compacted to [/data1/cassandra/data/MA_DDR/indexes_03-hc-5369-Data.db,]. 5,821,590 to 5,306,354 (~91% of original) bytes for 123 keys at 0.163755MB/s. Time: 30,903ms.

      Here's a slightly larger one:
      INFO [CompactionExecutor:43] 2011-11-25 22:23:28,956 CompactionTask.java (line 213) Compacted to [/data1/cassandra/data/MA_DDR/indexes_03-hc-5336-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5337-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5338-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5339-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5340-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5341-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5342-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5343-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5344-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5345-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5346-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5347-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5348-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5349-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5350-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5351-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5352-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5353-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5354-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5355-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5356-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5357-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5358-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5359-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5360-Data.db,/data1/cassandra/data/MA_DDR/indexes_03-hc-5361-Data.db,]. 140,706,512 to 137,990,868 (~98% of original) bytes for 2,181 keys at 0.338627MB/s. Time: 388,623ms.

      This is with compaction throttling set to 0 (Off).

      Because of this, I believe it's going to take a very long time to recover from having so many small sstables.
      It may be worth noting that we're using Solaris 10; possibly listFiles() is faster on other platforms?

      Is it feasible to keep track of the temp files and just delete them rather than searching for them for each SSTable using SSTable.componentsFor()?

      Here's the stack trace for the CompactionExecutor:14 thread, which appears to be occupying the majority of the CPU time on this node:

      Name: CompactionExecutor:14
      State: RUNNABLE
      Total blocked: 3 Total waited: 1,610,714

      Stack trace:
      java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
      java.io.UnixFileSystem.getBooleanAttributes(Unknown Source)
      java.io.File.isDirectory(Unknown Source)
      org.apache.cassandra.io.sstable.SSTable$3.accept(SSTable.java:204)
      java.io.File.listFiles(Unknown Source)
      org.apache.cassandra.io.sstable.SSTable.componentsFor(SSTable.java:200)
      org.apache.cassandra.io.sstable.SSTableWriter.cleanupIfNecessary(SSTableWriter.java:289)
      org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:189)
      org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:57)
      org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:134)
      org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:114)
      java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
      java.util.concurrent.FutureTask.run(Unknown Source)
      java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
      java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
      java.lang.Thread.run(Unknown Source)

      No matter where I click in the busy compaction thread timeline in YourKit, the thread is in the Running state showing the trace above, except for short periods when it is actually compacting.

      Thanks,
      Eric

      Attachments:
      1. 3532.txt (1 kB, Eric Parusel)
      2. 3532-v2.txt (10 kB, Jonathan Ellis)
      3. 3532-v3.txt (15 kB, Jonathan Ellis)

        Activity

        Jonathan Ellis added a comment -

        Looks like leveled compaction means that sstable creation can be part of the critical path now:

            /**
             * Discovers existing components for the descriptor. Slow: only intended for use outside the critical path.
             */
            static Set<Component> componentsFor(final Descriptor desc, final Descriptor.TempState matchState)

        Is it feasible to keep track of the temp files and just delete them rather than searching for them for each SSTable using SSTable.componentsFor()?

        Simplest would be to just check File.exists on the limited set of possible temp file names. Next simplest and slightly more performant would be to move the cleanup out of the finally blocks, and into a catch block: the cleanup is a no-op if everything went well.
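
        Something along these lines, perhaps (untested sketch; the exact component list would need double-checking):

            // untested sketch: stat the handful of possible temp component names directly,
            // instead of listing the whole data directory
            Descriptor tmpDesc = desc.asTemporary(true);
            for (Component component : Arrays.asList(Component.DATA,
                                                     Component.PRIMARY_INDEX,
                                                     Component.FILTER,
                                                     Component.STATS,
                                                     Component.DIGEST))
            {
                File tmpFile = new File(tmpDesc.filenameFor(component));
                if (tmpFile.exists())   // one exists() check per candidate component
                    tmpFile.delete();
            }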

        Eric Parusel added a comment -

        Thanks.
        Do I need to try to delete all components for that descriptor.asTemporary(), or just specific ones?
        A Component of type BITMAP_INDEX requires an id, so I'm not sure how I'd find this out without listing the directory contents.

        Jonathan Ellis added a comment -

        BITMAP_INDEX type was never completed (CASSANDRA-1472) so I'm fine with removing that code and doing whatever is simplest. We can get more sophisticated if/when we get back to working on 1472.

        Eric Parusel added a comment -

        Here's a small patch, and a few notes:

        • I wasn't sure how best to document the interaction with BITMAP_INDEX; I hope a TODO there is OK.
        • I created a copy via descriptor.asTemporary(true). Is descriptor always guaranteed to have temporary==true? It left me a little uneasy.
        • I used SSTable.delete (while checking ahead of time that the file exists), because I wasn't sure whether the ordering of deletes was important.
        • SSTableReader.open() is the other method that calls SSTable.componentsFor; I only mention this because it may be a suboptimal call as well.
        Eric Parusel added a comment -

        I applied the patch in our test environment against the 1.0.4 revision, and compaction is humming along nicely now.
        On the (otherwise idle) node I'm watching, which had 250,000 files in data/, the file count is decreasing by 1,000-1,900 per minute.

        Jonathan Ellis added a comment -

        v2 attached, with the new approach moved into componentsFor, so that open() can take advantage of the improvement too. Also, componentsFor now respects the temporary-ness of the descriptor passed, so a separate TempState enum is unnecessary.

        Also renamed cleanupIfNecessary to abort and moved it to a catch block, as discussed above.
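
        The call pattern in CompactionTask ends up roughly like this (sketch only, not the patch verbatim; writer setup and row iteration details omitted):

            try
            {
                while (rows.hasNext())
                    writer.append(rows.next());
                newSSTable = writer.closeAndOpenReader();
            }
            catch (Exception e)
            {
                writer.abort();                  // failure path only: deletes the known temp components
                throw new RuntimeException(e);   // success path no longer pays any directory-scan cost
            }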

        Eric Parusel added a comment -

        Thank you Jonathan – I applied the patch and it works for me.
        Cheers

        Sylvain Lebresne added a comment -
        • Wrapping exceptions into RuntimeException() blindly will confuse the catcher of UserInterruptedException in DTPE.logExceptionsAfterExecute. We could make that catcher unwrap the full exception, but the truth is I'm not a fan of wrapping exceptions needlessly. Maybe we could just add a silence method somewhere:
          public void silence(Exception e)
          {
              if (e instanceof RuntimeException)
                  throw (RuntimeException) e;
              else
                  throw new RuntimeException(e);
          }
          

          and use that (as we already do in WrappedRunnable actually); see the usage sketch after this list.

        • We could remove the bitmap index type while we're at it (it'd be one less type to check).
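
        For instance, the catch block above would then read something like this (sketch only):

            catch (Exception e)
            {
                writer.abort();
                silence(e);   // rethrows a RuntimeException as-is, wraps anything else
            }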
        Jonathan Ellis added a comment -

        Added FBUtilities.unchecked(Exception) as suggested, and removed the BITMAP component.

        Sylvain Lebresne added a comment -

        +1

        Jonathan Ellis added a comment -

        Committed.

        Eric Parusel added a comment -

        Patch 3532-v3.txt applied here, works for me. Thanks again!

        Eric Parusel added a comment -

        Hmm, I may have spoken too soon. This could also be a separate bug, however.

        The nodes in my cluster are using a lot of file descriptors, holding tmp files open. A few are using 50K+, nearing their limit of 64K (on Solaris).

        Here's a small snippet of lsof:
        java 828 appdeployer *146u VREG 181,65540 0 333376 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776429-Data.db
        java 828 appdeployer *147u VREG 181,65540 0 332952 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776359-Data.db
        java 828 appdeployer *148u VREG 181,65540 0 333079 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776380-Index.db
        java 828 appdeployer *149u VREG 181,65540 0 333080 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776380-Data.db
        java 828 appdeployer *150u VREG 181,65540 0 333224 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776403-Index.db
        java 828 appdeployer *151u VREG 181,65540 0 333025 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776372-Data.db
        java 828 appdeployer *152u VREG 181,65540 0 333225 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776403-Data.db
        java 828 appdeployer *154u VREG 181,65540 0 333858 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776514-Index.db
        java 828 appdeployer *155u VREG 181,65540 0 333426 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776438-Data.db
        java 828 appdeployer *156u VREG 181,65540 0 333326 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776421-Data.db
        java 828 appdeployer *157u VREG 181,65540 0 333553 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776460-Data.db
        java 828 appdeployer *158u VREG 181,65540 0 333501 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776452-Index.db
        java 828 appdeployer *159u VREG 181,65540 0 333597 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776468-Index.db
        java 828 appdeployer *160u VREG 181,65540 0 333598 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776468-Data.db
        java 828 appdeployer *162u VREG 181,65540 0 333884 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776518-Data.db
        java 828 appdeployer *163u VREG 181,65540 0 333502 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776452-Data.db
        java 828 appdeployer *165u VREG 181,65540 0 333929 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776527-Index.db
        java 828 appdeployer *166u VREG 181,65540 0 333859 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776514-Data.db
        java 828 appdeployer *167u VREG 181,65540 0 333663 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776480-Data.db
        java 828 appdeployer *168u VREG 181,65540 0 333812 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776506-Index.db

        I spot checked a few and found they still exist on the filesystem too:
        -rw-r--r-- 1 appdeployer appdeployer 0 Dec 12 07:16 /data1/cassandra/data/MA_DDR/messages_meta-tmp-hb-776506-Index.db

        Jonathan Ellis added a comment -

        That does sound like a separate problem. What do you see when you grep the log for messages_meta-tmp-hb-776506?

        Eric Parusel added a comment -

        Sure. Let me know if you'd prefer a separate ticket.

        I don't see anything in the logs matching "776506". Any suggestions as to which class(es) I could turn DEBUG logging on for (via JMX), if that would help troubleshoot?

        Jonathan Ellis added a comment -

        Yes, separate ticket.

        org.apache.cassandra.db.compaction, org.apache.cassandra.db.Memtable, and org.apache.cassandra.db.DataTracker, to start with.

        Eric Parusel added a comment -

        Separate ticket created: CASSANDRA-3616


          People

          • Assignee: Eric Parusel
          • Reporter: Eric Parusel
          • Reviewer: Jonathan Ellis
          • Votes: 1
          • Watchers: 0
