Cassandra / CASSANDRA-6866

Read repair path of quorum reads makes the cluster time out all requests under load



    • Type: Improvement
    • Status: Resolved
    • Priority: Normal
    • Resolution: Duplicate


      The current implementation of the read repair path for quorum reads is:
      1. Request data from 1 or 2 endpoints; request digests from the others.
      2. Compare the digests; on mismatch, throw a DigestMismatchException.
      3. Request data from all contacted replicas at CL.ALL.
      4. Prepare read repairs; send the repair mutations.
      5. Wait for all mutations to be acked.
      6. Retry the read and prepare the result.
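The steps above can be sketched as a small simulation (names like `Replica` and `quorumRead` are hypothetical, not Cassandra's API; string comparison stands in for timestamp-based resolution):

```java
import java.util.*;

// Illustrative model of the quorum read repair path described above.
public class ReadRepairPathSketch {
    // A replica holds a value; the digest is simulated by its hash.
    public record Replica(String name, String value, boolean up) {
        public int digest() { return value.hashCode(); }
    }

    // Returns the resolved value, or null when the read would time out.
    public static String quorumRead(List<Replica> replicas) {
        // Step 1: data from the first replica, digests from the rest.
        Replica dataSource = replicas.get(0);
        boolean mismatch =
                replicas.stream().anyMatch(r -> r.digest() != dataSource.digest());
        // Step 2: digests agree -> return the data response directly.
        if (!mismatch) return dataSource.value();
        // Steps 3 and 5: both the CL.ALL data round and the wait for
        // repair-mutation acks need a reply from EVERY contacted replica,
        // so one silently-down endpoint blocks the whole read until it
        // times out (modeled here as returning null).
        if (replicas.stream().anyMatch(r -> !r.up())) return null;
        // Steps 4 and 6: resolve the newest value (string max stands in
        // for timestamp resolution) and return it once repairs are acked.
        return replicas.stream().map(Replica::value)
                .max(Comparator.naturalOrder()).get();
    }

    public static void main(String[] args) {
        // All replicas up: mismatch is repaired and the read succeeds.
        System.out.println(quorumRead(List.of(
                new Replica("a", "v2", true),
                new Replica("b", "v1", true),
                new Replica("c", "v2", true))));
        // One replica silently down: the CL.ALL round hangs -> timeout.
        System.out.println(quorumRead(List.of(
                new Replica("a", "v2", true),
                new Replica("b", "v1", false),
                new Replica("c", "v2", true))));
    }
}
```

Note that the quorum was already satisfied in the failing case; it is only the digest-mismatch path that escalates the requirement to every contacted replica.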

      The main problem is in step 3 (though step 5 is not good either): any of the endpoints can go down, but not yet be known to be down, while this is executing.

      So, if a noticeable amount of read repair is happening (for example, shortly after a rack of nodes has started up), waiting at CL.ALL and for acks of read-repair mutations from not-yet-known-to-be-down endpoints quickly occupies all client thread pools on all nodes, and the cluster becomes unavailable.

      This also makes (otherwise successful) reads time out from time to time even under light cluster load, simply because of a temporary network hiccup or GC pause on a single endpoint.

      I do not have a generic solution for this; I fixed it in a way appropriate for us: always using the speculative retry policy, patched to make data requests only (no digests) and to perform read repair on that data at once (without requesting it again). This way, not-yet-known-to-be-down endpoints simply do not respond to the data requests, so the subsequent read repair path does not contact them at all.
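The workaround can be sketched the same way (again with hypothetical names; this models the idea, not the attached patch): send data requests to all replicas, proceed once a quorum answers, and repair only the replicas that actually responded.

```java
import java.util.*;

// Illustrative model of the data-only read described above: non-responding
// endpoints never enter the repair path, so they cannot block the read.
public class DataOnlyReadSketch {
    public record Replica(String name, String value, boolean responds) {}

    // Returns the resolved value once `quorum` replicas respond, or null
    // when fewer than `quorum` respond (genuine unavailability).
    public static String readWithDataRequests(List<Replica> replicas, int quorum) {
        // Data (not digest) requests go to all replicas; collect responders.
        List<Replica> responders =
                replicas.stream().filter(Replica::responds).toList();
        if (responders.size() < quorum) return null;
        // Resolve the newest value among responders (string max stands in
        // for timestamp resolution).
        String resolved = responders.stream().map(Replica::value)
                .max(Comparator.naturalOrder()).get();
        // Read repair targets only responders holding stale data; there is
        // no CL.ALL round and no wait on silent replicas.
        for (Replica r : responders) {
            if (!r.value().equals(resolved)) {
                // sendRepairMutation(r, resolved); // hypothetical send
            }
        }
        return resolved;
    }

    public static void main(String[] args) {
        // A slow or down replica does not block the read while a quorum answers.
        System.out.println(readWithDataRequests(List.of(
                new Replica("a", "v2", true),
                new Replica("b", "v1", true),
                new Replica("c", "v2", false)), 2));
    }
}
```

The trade-off is extra data traffic on every read (full responses instead of cheap digests), which is presumably what the attached latency comparison measures.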

      I attached my patch here for illustration.


        Attachments
        1. Read_Latency__2data___digest_vs_3_data__-_99_.png (87 kB, Oleg Anastasyev)
        2. ReadRepairPathFixExample.txt (6 kB, Oleg Anastasyev)

        Issue Links



              Assignee: Unassigned
              Reporter: Oleg Anastasyev (m0nstermind)
              Votes: 0
              Watchers: 5