CASSANDRA-2013

Add CL.TWO, CL.THREE; tweak CL documentation


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Low
    • Resolution: Fixed
    • Fix Version/s: 0.7.4
    • Component/s: None
    • Labels: None

    Description

      Attaching a draft patch to add CL.TWO and CL.THREE.

      The motivation is that having to choose between ONE and QUORUM is too narrow a choice for clusters with RF > 3. In such a case it makes sense to write at e.g. CL.TWO for durability purposes, even though you are not looking for the strong consistency of QUORUM. The same argument applies to CL.THREE. TWO and THREE felt like reasonable additions; there is no objective reason why THREE is the obvious place to stop.

      Technically one would want to specify an arbitrary number, but that is a much more significant change.
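
      To make the semantics concrete, here is a minimal sketch (illustrative only, not the attached patch; the method name and structure are assumptions) of how a write handler could map each level, including the new TWO and THREE, to the number of responses to block for:

      // Illustrative sketch only -- not the attached patch. Maps a
      // ConsistencyLevel to the number of write responses the coordinator
      // blocks for ("blockFor"). Note that TWO and THREE are fixed counts,
      // independent of the replication factor.
      private static int determineBlockFor(ConsistencyLevel cl, int replicationFactor)
      {
          switch (cl)
          {
              case ONE:    return 1;
              case TWO:    return 2; // new: always two replicas
              case THREE:  return 3; // new: always three replicas
              case QUORUM: return (replicationFactor / 2) + 1;
              case ALL:    return replicationFactor;
              default:
                  throw new UnsupportedOperationException("Unsupported level: " + cl);
          }
      }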

      Two open questions:

      (1) I adjusted the documentation of ConsistencyLevel to be more internally consistent and to reflect what I believe to be reality (for example, as far as I can tell QUORUM does not send requests to all nodes, as claimed in the .thrift file). I am not terribly confident that I have not missed something, though.

      (2) There is at least one unresolved issue, namely this assertion check in WriteResponseHandler:

      assert 1 <= blockFor && blockFor <= 2 * Table.open(table).getReplicationStrategy().getReplicationFactor()
          : String.format("invalid response count %d for replication factor %d",
                          blockFor, Table.open(table).getReplicationStrategy().getReplicationFactor());

      At THREE, this causes an assertion failure on a keyspace with RF=1. As a user, I would expect UnavailableException. However, I am uncertain what to do about the assertion itself. I think this highlights one way in which TWO/THREE differ from the previously existing CLs: they essentially hard-code replica counts, rather than expressing them in terms that can, by definition, be satisfied by the cluster at any RF.

      Given that THREE (and not TWO, but only because of the implementation detail that bootstrapping is involved) implies a replica count that is independent of the replication factor, there is essentially a new failure mode: it is suddenly possible for a consistency level to be fundamentally incompatible with the RF. My gut reaction is to still want UnavailableException, and that the assertion check can essentially be removed (other than the 1 <= blockFor part).

      If a different failure mode is desired, it should presumably not be an assertion failure (which ought to indicate a Cassandra bug). Maybe an UnsatisfiableConsistencyLevel exception? I propose just adjusting the assertion (which, incidentally, has no equivalent in ReadCallback); giving a friendlier error message in case of a CL/RF mismatch would be good, but does not feel worth the extra complexity.
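
      Concretely, the adjustment might look something like the sketch below (illustrative only; the writeEndpoints name and surrounding structure are assumptions, and the attached patches contain the actual changes). With the upper bound dropped, an RF=1 keyspace written at THREE falls through to the normal availability check and surfaces as UnavailableException:

      // Sketch only -- names are assumptions; see the attachments for the
      // real changes. Keep just the lower-bound sanity check as an assertion:
      assert blockFor >= 1 : String.format("invalid response count %d", blockFor);

      // A blockFor that the cluster cannot satisfy (e.g. THREE at RF=1) is
      // then caught by the availability check and reported to the client:
      if (writeEndpoints.size() < blockFor)
          throw new UnavailableException();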

      'ant test' passes. I have tested with py_stress on a three-node cluster with an RF=3 keyspace, with 1 and 2 nodes down respectively, and I get the expected behavior (available or unavailable as a function of how many nodes are up).

      Attachments

        1. 2013-assert.txt (2 kB, Jonathan Ellis)
        2. 2013.txt (11 kB, Peter Schuller)


          People

            Assignee: Peter Schuller (scode)
            Reporter: Peter Schuller (scode)
            Authors: Peter Schuller
            Reviewers: T Jake Luciani
            Votes: 0
            Watchers: 1
