
CASSANDRA-10994: Move away from SEDA to TPC, stage 1


Details

    • Type: Improvement
    • Status: Open
    • Priority: Normal
    • Resolution: Unresolved
    • Fix Version/s: None
    • Component/s: Legacy/Core
    • Labels: None

    Description

      To start off the transition, I propose a modest (if not underwhelming) set of changes for stage 1:

      1. Convert read and write request paths to be fully non-blocking, and execute them directly within Netty context, avoiding any thread handoff (CASSANDRA-10993)
      2. Implement our own in-process page cache to complement (1) (CASSANDRA-5863)

      (2) is necessary to enable serving reads for memory-resident rows without handing them off to another stage.
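
      To make the intended flow concrete, below is a minimal, hedged Java sketch of what (1) and (2) combine to allow: a read served entirely on the Netty event loop when the requested data is memory-resident. All of the types here (ReadRequest, Partition, InProcessPageCache, and so on) are simplified stand-ins for illustration only, not actual Cassandra classes.

{code:java}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import java.util.Optional;

// Simplified stand-ins for the real request, partition and cache types.
interface ReadRequest { Object key(); }
interface Partition {}
interface ReadResponse
{
    static ReadResponse from(Partition partition) { return new ReadResponse() {}; }
}
interface InProcessPageCache { Optional<Partition> lookup(Object key); }

final class InlineReadHandler extends SimpleChannelInboundHandler<ReadRequest>
{
    private final InProcessPageCache cache;

    InlineReadHandler(InProcessPageCache cache)
    {
        this.cache = cache;
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ReadRequest request)
    {
        Optional<Partition> cached = cache.lookup(request.key());
        if (cached.isPresent())
        {
            // Memory-resident data: build and flush the response right here,
            // on the event loop thread, with no handoff to another stage.
            ctx.writeAndFlush(ReadResponse.from(cached.get()));
        }
        else
        {
            // Cache miss: hand off to the blocking-I/O read pool
            // (sketched after the thread pool paragraph below).
            handOffToBlockingReadPool(ctx, request);
        }
    }

    private void handOffToBlockingReadPool(ChannelHandlerContext ctx, ReadRequest request)
    {
        // Intentionally left as a stub in this sketch.
    }
}
{code}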

      However, read requests that cannot be served from the cache will have to be handed off to a new thread pool (replacing the old READ stage) that will execute individual ReadCommands using blocking I/O.

      The extra thread pool here is unfortunate, but cannot be avoided, as we have to support filesystems that aren’t XFS.
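
      A hedged sketch of that miss path follows, assuming a fixed-size pool and simplified stand-in types (ReadCommand and executeLocally here are illustrative, not the real Cassandra signatures). The key point is that blocking disk I/O happens only on the pool thread, and the completed result hops back onto the originating event loop before the response is written.

{code:java}
import io.netty.channel.EventLoop;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified stand-ins for the real command/response types.
interface ReadCommand { ReadResponse executeLocally(); }
interface ReadResponse {}

final class BlockingReadPool
{
    // Replaces the old READ stage; the sizing here is illustrative only.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(32);

    static CompletableFuture<ReadResponse> submit(ReadCommand command, EventLoop eventLoop)
    {
        CompletableFuture<ReadResponse> result = new CompletableFuture<>();
        POOL.execute(() ->
        {
            try
            {
                // Blocking disk I/O happens here, on a pool thread,
                // never on the Netty event loop.
                ReadResponse response = command.executeLocally();
                // Hop back to the originating event loop to complete, so the
                // response is written from a single, predictable thread.
                eventLoop.execute(() -> result.complete(response));
            }
            catch (Throwable t)
            {
                eventLoop.execute(() -> result.completeExceptionally(t));
            }
        });
        return result;
    }
}
{code}

      Completing the future on the event loop rather than on the pool thread keeps response writing single-threaded per connection, which is the property the non-blocking request path in (1) relies on.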

      For stage 1, we are not going to partition data ownership yet - every worker thread will be able to serve requests for any token. We are also not going to introduce processor affinity, or alter our partition or memtable data structures.

      Memtable flushing, compaction, and repair will not be modified beyond necessary changes caused by CASSANDRA-5863.

      With (1) and (2) combined, we expect to see noticeable improvements for at least CL.ONE reads that can be served from memory and RF=1 writes. That, together with not introducing any noticeable performance regressions for other types of requests, is the success criterion for stage 1.

      I should note that we could do more transition work in parallel - in particular, have the team work on making other components non-blocking at the same time - but I don’t want to go that way, for the following reasons:

      • Cassandra is a solid, production-ready database, and should remain so. Introducing too much change in big chunks would make it hard to maintain stability
      • there is an argument to be made for not having (some of the) maintenance tasks share the event loop with the read and write request handling loops, as they don’t necessarily benefit from it (cc aweisberg, who has an expanded comment prepared on this). Once we are done with stage 1, we will evaluate whether or not we should do that
      • introducing change progressively would give projects built on Cassandra (Stratio’s Lucene-based search, Tuplejump’s integration, and DSE) time to catch up and make the necessary changes as they are introduced

      This ticket will serve as an umbrella issue for all the work necessary for stage 1.

      People

        Assignee: Unassigned
        Reporter: Aleksey Yeschenko (aleksey)
        Votes: 5
        Watchers: 41