Type: New Feature
Resolution: Not A Problem
Fix Version/s: None
At minimum, the application requires a consistency level of X, which must be a fault-tolerant CL. However, when there is no failure it would be advantageous to use a stronger consistency level Y (Y > X).
The application defines minimum (X) and maximum (Y) consistency levels. C* can apply adaptive consistency logic to use Y whenever possible and downgrade to X when a failure occurs.
The implementation should not negatively impact performance; therefore, the adaptive state has to be maintained globally (not per request).
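A minimal sketch of what that global state machine could look like (all names here are hypothetical, not an existing C* API): requests read the current level cheaply, while a separate health monitor flips it between MIN and MAX.

{code:java}
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of adaptive consistency backed by global state.
enum ConsistencyLevel { LOCAL_QUORUM, EACH_QUORUM, QUORUM, ALL }

class AdaptiveConsistency {
    private final ConsistencyLevel minCl;  // X: must stay achievable under failure
    private final ConsistencyLevel maxCl;  // Y: used while the cluster is healthy
    private final AtomicReference<ConsistencyLevel> current;

    AdaptiveConsistency(ConsistencyLevel minCl, ConsistencyLevel maxCl) {
        this.minCl = minCl;
        this.maxCl = maxCl;
        this.current = new AtomicReference<>(maxCl);
    }

    // CL to attach to the next request: a cheap atomic read, no per-request checks.
    ConsistencyLevel current() {
        return current.get();
    }

    // Called by a cluster-health monitor when replicas become unreachable.
    void onFailure() {
        current.set(minCl);
    }

    // Called when the monitor observes that the cluster is healthy again.
    void onRecovery() {
        current.set(maxCl);
    }
}
{code}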
Consider a case where a user wants to maximize both uptime and consistency. They are designing a system on C* where transactions are read/written with LOCAL_QUORUM and distributed across 2 DCs. Occasional inconsistencies between DCs can be tolerated; R/W with LOCAL_QUORUM is satisfactory in most cases.
The application requires new transactions to be readable right after they are generated, and the write and the read may go through different DCs (no stickiness). When a user writes into DC1 and immediately reads from DC2, replication delay may cause problems: the transaction won't show up on the read in DC2, so the user will retry and create a duplicate transaction. Occasional duplicates are fine; the goal is to minimize the number of dups.
Therefore, we want to perform writes with a stronger consistency level (EACH_QUORUM) whenever possible without compromising availability. Using adaptive consistency, the application should be able to define:
Read CL = LOCAL_QUORUM
Write CL = ADAPTIVE (MIN:LOCAL_QUORUM, MAX:EACH_QUORUM)
A similar scenario can be described for the Write CL = ADAPTIVE (MIN:QUORUM, MAX:ALL) case.
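Reusing the hypothetical AdaptiveConsistency sketch above, the example configuration would look roughly like this:

{code:java}
// Hypothetical usage; the ADAPTIVE write CL from the example above.
AdaptiveConsistency writeCl = new AdaptiveConsistency(
        ConsistencyLevel.LOCAL_QUORUM,   // MIN: tolerated during a DC outage
        ConsistencyLevel.EACH_QUORUM);   // MAX: used while both DCs are healthy

// Every write asks the global state which CL to use right now:
ConsistencyLevel cl = writeCl.current();
{code}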
- This functionality can/should be implemented by users themselves.
It will be hard for an average user to implement topology monitoring and the required state machine. Moreover, this is a pattern that repeats across applications; the sketch below shows the kind of code every such user has to write today.
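For illustration, assuming the DataStax Java driver 3.x API, a hand-rolled version ends up doing per-request downgrading, which is exactly the overhead the global state above avoids:

{code:java}
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.exceptions.UnavailableException;

// Simplified sketch of the per-request downgrade a user must hand-roll today.
class DowngradingExecutor {
    private final Session session;

    DowngradingExecutor(Session session) {
        this.session = session;
    }

    ResultSet execute(Statement stmt) {
        try {
            return session.execute(stmt.setConsistencyLevel(ConsistencyLevel.EACH_QUORUM));
        } catch (UnavailableException e) {
            // Not enough replicas alive for EACH_QUORUM; retry at the minimum CL.
            return session.execute(stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM));
        }
    }
}
{code}

This only catches UnavailableException on each request; a complete implementation would also need topology monitoring and state to avoid paying the failed first attempt on every write, which is the repeated pattern the feature would absorb.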
- Transparent downgrading violates the CL contract, and that contract is considered just about the most important element of Cassandra's runtime behavior.
Fully transparent downgrading without any contract is dangerous. However, would it be a problem if we explicitly specified only two discrete CL levels, MIN_CL and MAX_CL?
- If you have split-brain DCs (partitioned in the CAP sense), you have to sacrifice either consistency or availability, and auto-downgrading sacrifices consistency in dangerous ways if the application isn't designed to handle it. And if the application is designed to handle it, then it should be able to handle it in normal circumstances, not just degraded/extraordinary ones.
Agreed. The application should be designed for MIN_CL. In that case, MAX_CL will not cause much harm; it only adds flexibility.
- It might be a better idea to downgrade loudly instead of silently, meaning that the client code does an explicit retry with lower consistency on failure and takes some other kind of action to inform either users or operators of the problem. It is the silent part of the downgrading that could be dangerous.
There are certainly cases where the user should be informed when consistency changes in order to perform a custom action. For this purpose we could allow/require the user to register a callback function that is triggered whenever the consistency level changes; best practices could be enforced by making the callback mandatory. A sketch follows.
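As a sketch of the callback idea (again with hypothetical names, building on the ConsistencyLevel enum from the sketch above), making the listener a required constructor argument would guarantee that every downgrade is loud:

{code:java}
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.BiConsumer;

// Hypothetical sketch: adaptive state that notifies on every CL transition.
class AdaptiveConsistencyWithCallback {
    private final AtomicReference<ConsistencyLevel> current;
    private final BiConsumer<ConsistencyLevel, ConsistencyLevel> onChange;

    AdaptiveConsistencyWithCallback(ConsistencyLevel initial,
                                    BiConsumer<ConsistencyLevel, ConsistencyLevel> onChange) {
        this.current = new AtomicReference<>(initial);
        this.onChange = onChange;  // required: downgrades can never be silent
    }

    void transition(ConsistencyLevel next) {
        ConsistencyLevel previous = current.getAndSet(next);
        if (previous != next) {
            onChange.accept(previous, next);  // e.g. log a warning, alert operators
        }
    }
}
{code}

An application might register something like (from, to) -> log.warn("CL changed from {} to {}", from, to) and drive alerting or custom fallback logic from there.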