CASSANDRA-1316

Read repair does not always work correctly

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 0.6.4
    • Component/s: Core
    • Labels:
      None

      Description

      Read repair does not always work. At the least, we allow violation of the CL.ALL contract. To reproduce, create a three node cluster with RF=3, and json2sstable one of the attached json files on each node. This creates a row whose key is 'test' with 9 columns, but only 3 columns are on each machine. If you get_count this row in quick succession at CL.ALL, sometimes you will receive a count of 6, sometimes 9. After the ReadRepairManager has sent the repairs, you will always get 9, which is the desired behavior.
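
      As a rough illustration of the symptom (a self-contained sketch, not Cassandra code; all names below are made up), this is what you would expect if the read resolver merges responses from only two of the three replicas before answering: each replica holds 3 disjoint columns of the row, so two responses merge to a count of 6 and all three merge to 9.

          import java.util.Arrays;
          import java.util.HashSet;
          import java.util.List;
          import java.util.Set;

          // Toy model: three replicas, each holding 3 disjoint columns of the 'test' row.
          public class CountSketch
          {
              static final List<Set<String>> REPLICAS = Arrays.asList(
                      new HashSet<>(Arrays.asList("c1", "c2", "c3")),  // node 1
                      new HashSet<>(Arrays.asList("c4", "c5", "c6")),  // node 2
                      new HashSet<>(Arrays.asList("c7", "c8", "c9"))); // node 3

              // Merge the first 'responses' replies, the way a read resolver would.
              static int countAfter(int responses)
              {
                  Set<String> merged = new HashSet<>();
                  for (int i = 0; i < responses; i++)
                      merged.addAll(REPLICAS.get(i));
                  return merged.size();
              }

              public static void main(String[] args)
              {
                  System.out.println(countAfter(2)); // 6 -- resolving before every replica has answered
                  System.out.println(countAfter(3)); // 9 -- the correct count for CL.ALL
              }
          }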

      I have another data set, obtained in the wild, which never fully repairs for some reason, but it's a bit large to attach (600-ish columns per machine). I'm still trying to figure out why read repair isn't working on this set, but I always get different results when reading at any CL, including ALL, no matter how long I wait or how many reads I do.

      Attachments

      1. 1316-RRM.txt
        7 kB
        Jonathan Ellis
      2. RRR-v2.txt
        3 kB
        Jonathan Ellis
      3. 001_correct_responsecount_in_RRR.txt
        1 kB
        Brandon Williams
      4. cassandra-3.json
        0.1 kB
        Brandon Williams
      5. cassandra-2.json
        0.1 kB
        Brandon Williams
      6. cassandra-1.json
        0.1 kB
        Brandon Williams

        Activity

        Brandon Williams added a comment -

        I tried to bisect this issue and went as far back as 0.4.2 without finding a version where read repair succeeds. I will see whether the data that never repairs can be scrubbed so that I can attach it to this ticket for debugging.

        Brandon Williams added a comment -

        It looks like the key difference between the real data and the attached toy data is that the real data has the key in multiple sstables. If left this way, read repair never fully works, but if I force a compaction then it succeeds.

        Brandon Williams added a comment -

        Updated json files illustrate one possible scenario: nodes 1 and 2 have a column, and node 3 has the column tombstoned with the same timestamp. It looks like tombstones aren't taking precedence.
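
        As a rough sketch of the precedence being described (this is not Cassandra's reconciliation code; the class and method names are made up for illustration), the expected rule is that when a live column and a tombstone carry the same timestamp, the tombstone wins:

            // Toy reconciliation: on a timestamp tie, a deletion should beat a live value.
            public class ReconcileSketch
            {
                static final class Column
                {
                    final String name;
                    final long timestamp;
                    final boolean tombstone;

                    Column(String name, long timestamp, boolean tombstone)
                    {
                        this.name = name;
                        this.timestamp = timestamp;
                        this.tombstone = tombstone;
                    }
                }

                // Pick the winner between two versions of the same column.
                static Column reconcile(Column a, Column b)
                {
                    if (a.timestamp != b.timestamp)
                        return a.timestamp > b.timestamp ? a : b;
                    return a.tombstone ? a : b; // equal timestamps: the tombstone takes precedence
                }

                public static void main(String[] args)
                {
                    Column live = new Column("x", 100L, false);   // nodes 1 and 2
                    Column deleted = new Column("x", 100L, true); // node 3
                    System.out.println(reconcile(live, deleted).tombstone); // true: the delete wins
                }
            }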

        Brandon Williams added a comment -

        Patch to solve one problem: use determineBlockFor to set the correct response count passed to the ReadResponseResolver (RRR), so that CL.ALL works.
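
        As a minimal sketch of the idea behind this fix (not the actual patch; the enum and numbers below are just the standard consistency-level semantics, not code from the ticket), the number of responses the resolver blocks for has to follow the requested consistency level, so CL.ALL waits for every replica rather than a fixed count:

            // Toy version of a determineBlockFor-style calculation.
            public class BlockForSketch
            {
                enum ConsistencyLevel { ONE, QUORUM, ALL }

                static int determineBlockFor(ConsistencyLevel cl, int replicationFactor)
                {
                    switch (cl)
                    {
                        case ONE:    return 1;
                        case QUORUM: return replicationFactor / 2 + 1;
                        case ALL:    return replicationFactor;
                        default:     throw new AssertionError(cl);
                    }
                }

                public static void main(String[] args)
                {
                    // With RF=3, CL.ALL must wait for 3 responses before resolving.
                    System.out.println(determineBlockFor(ConsistencyLevel.ALL, 3)); // 3
                }
            }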

        Jonathan Ellis added a comment -

        v2 updates the QuorumResponseHandler (QRH) arguments to use responseCount as well, even though it is ignored there.

        Brandon Williams added a comment -

        Committed RRRv2.

        Jonathan Ellis added a comment -

        Patch that simplifies debugging by removing ReadRepairManager, which is mostly 100-odd lines of obfuscation around MessagingService.sendOneWay. (Backport from CASSANDRA-1077, which was applied to trunk two months ago.)

        Jonathan Ellis added a comment -

        Brandon's first patch, fixing reads at CL.ALL, turns out to address the only actual bug here. The rest is obscure-but-valid behavior when expired tombstones haven't been replicated across the cluster (i.e., the tombstones exist on some nodes, but not all). Let me give an example:

        Say node A has columns x and y, where x is an expired tombstone with timestamp T1, and node B has a live column x at time T2, where T2 < T1.

        If you read at ALL, you will see x from B and y from A. You will not see x from A – since it is expired, it is no longer relevant off-node. Thus, the ALL read will send a repair of column x to A, since it was "missing."

        But the next time you read from A, the tombstone will still suppress the newly-written copy of x from B, because its timestamp is higher. So the replicas won't converge.
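
        To make that timeline concrete, here is a toy restatement of it (a self-contained sketch, not Cassandra code; the timestamp values are made up):

            public class TombstoneConvergenceSketch
            {
                public static void main(String[] args)
                {
                    long t1 = 200L; // expired tombstone for x on node A
                    long t2 = 100L; // live x on node B, with T2 < T1

                    // The ALL read drops A's expired tombstone, so x looks "missing" on A
                    // and read repair writes x@T2 back to A.
                    long repairedTimestamp = t2;

                    // On A's next local read the tombstone still outranks the repaired copy,
                    // so x stays suppressed and the next ALL read repeats the same repair.
                    boolean suppressed = t1 > repairedTimestamp;
                    System.out.println(suppressed); // true: the replicas never converge on their own
                }
            }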

        This is not a bug, because the design explicitly allows that behavior when tombstones expire before being propagated to all nodes; see http://wiki.apache.org/cassandra/DistributedDeletes. The best way to avoid this, of course, is to run repair frequently enough to ensure that tombstones are propagated within GCGraceSeconds of being written.

        But if you do find yourself in this situation, you have two options to get things to converge again:

        1) The simplest option is to perform a major compaction on each node, which will eliminate all expired tombstones.

        2) But if you want to propagate as many of the tombstones as possible first, increase the GCGraceSeconds setting everywhere (this requires a rolling restart) and perform a full repair as described in http://wiki.apache.org/cassandra/Operations. After the repair is complete, you can set GCGraceSeconds back to its previous value.

        Hudson added a comment -

        Integrated in Cassandra-0.7 #70 (See https://hudson.apache.org/hudson/job/Cassandra-0.7/70/)


          People

          • Assignee:
            Brandon Williams
          • Reporter:
            Brandon Williams
          • Votes:
            0
          • Watchers:
            2
