Details
- Type: Epic
- Status: Triage Needed
- Priority: Normal
- Resolution: Unresolved
- Epic Name: Apache Cassandra Unified Repair Solution
- Platform: All
Description
Motivation
Anti-entropy (Apache Cassandra repair) is essential for every Apache Cassandra cluster to fix data inconsistencies; frequent data deletions and downed nodes are common causes. Several open-source orchestration solutions that trigger repair externally exist, because many large users have had to build their own scalable repair solution. However, this proliferation of custom solutions has caused considerable confusion in the community. Repair, like compaction, should therefore be an integral part of Cassandra for it to be a complete solution.
The proposal is to converge on one of the existing solutions and make it part of core Cassandra. Here is the design for one such solution:
Inside Cassandra, there are multiple repair types to schedule:
1) Full repair
2) Incremental repair
3) Paxos repair
The scheduler should be designed so that new repair categories can be added with minimal code change, and all repair types should progress automatically with minimal manual intervention.
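As an illustrative sketch of that extensibility (all names here are hypothetical and not taken from the linked PRs), each repair category could sit behind a small interface, so adding a new type means adding one implementation and leaving the scheduler untouched:
{code:java}
// Illustrative sketch only; all names are hypothetical and not taken
// from the linked PRs.

/** Minimal token-range placeholder standing in for Cassandra's Range<Token>. */
record TokenRange(long left, long right) {}

/** The repair categories listed above. */
enum RepairType { FULL, INCREMENTAL, PAXOS }

/**
 * Adding a new repair category should mean adding one implementation
 * of an interface like this, with no change to the scheduler itself.
 */
interface RepairTask
{
    RepairType type();

    /** Repair a single (keyspace, table, subrange); returns true on success. */
    boolean repairRange(String keyspace, String table, TokenRange subrange);
}
{code}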
Migrating to (and rolling back from) incremental repair has been extremely challenging, especially in a large fleet. One of the design principles is to make this almost touchless from the operator's point of view.
The Scheduler
Keeping the above motivation in mind, this design brings repair orchestration inside Cassandra itself, so that Cassandra repairs the entire ring on its own.
At a high level, the repair scheduler runs on a dedicated thread pool. It maintains a new replicated table in the system_distributed keyspace that records the repair history for every node, such as when each node was last repaired. The scheduler picks which node(s) repair first and continues orchestrating until every table and all of its token ranges are repaired. The algorithm can run repairs on multiple nodes simultaneously, and it splits each token range into subranges, retrying as needed to handle transient failures. Over time, this automatic repair has proven reliable enough to run from the moment a Cassandra cluster starts, like compaction, without manual intervention.
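To make the orchestration concrete, here is a minimal sketch of the subrange splitting and bounded retry described above, reusing the hypothetical TokenRange and RepairTask types from the earlier sketch. The history-table DDL, column names, and constants are assumptions for illustration, not the actual schema from the PRs:
{code:java}
// Illustrative sketch of the orchestration loop described above; the table
// name, columns, and constants below are assumptions, not the PR's schema.
import java.util.ArrayList;
import java.util.List;

class RepairScheduler
{
    // Stand-in for the replicated history table the design keeps in the
    // system_distributed keyspace, e.g. (hypothetical DDL):
    //   CREATE TABLE system_distributed.auto_repair_history (
    //       host_id uuid, repair_type text, last_repaired timestamp,
    //       PRIMARY KEY (host_id, repair_type));

    static final int SUBRANGES = 16;   // assumed fixed split factor
    static final int MAX_RETRIES = 3;  // assumed retry budget for transient failures

    /** Split a token range into equal subranges so each repair unit stays small. */
    static List<TokenRange> split(TokenRange range)
    {
        List<TokenRange> out = new ArrayList<>();
        long width = (range.right() - range.left()) / SUBRANGES; // ignores wraparound for brevity
        for (int i = 0; i < SUBRANGES; i++)
        {
            long left = range.left() + i * width;
            long right = (i == SUBRANGES - 1) ? range.right() : left + width;
            out.add(new TokenRange(left, right));
        }
        return out;
    }

    /** Repair one subrange with a bounded retry; on exhaustion the scheduler revisits it next cycle. */
    static boolean repairWithRetry(RepairTask task, String keyspace, String table, TokenRange subrange)
    {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++)
        {
            if (task.repairRange(keyspace, table, subrange))
                return true;
        }
        return false;
    }
}
{code}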
Because this fully automated repair scheduler lives inside Cassandra, there is no dependency on an external control plane, which significantly reduces operational overhead.
Detailed Design Doc
PR (on 4.1.6) (Last active: Sep 2024)
Many users currently run 4.1.6 in production, so the following PR against 4.1.6 makes it easier for everybody to review and test the code. If the community decides to accept this CEP, the change will land on trunk rather than 4.1.
https://github.com/apache/cassandra/pull/3367/
PR (on trunk) (Last active: Sep 2024)
https://github.com/apache/cassandra/pull/3598
PR (dtest) (Last active: Oct 2024)
https://github.com/apache/cassandra-dtest/pull/270
Discussion over Slack
Issue Links
- is a parent of:
  - CASSANDRA-20013: Suggestions from masokol (from ecchronos experience) (Triage Needed)
  - CASSANDRA-20035: Auto-delete snapshots at X% disk full (Triage Needed)
- is duplicated by:
  - CASSANDRA-14346: Scheduled Repair in Cassandra (Open)
  - CASSANDRA-10070: Automatic repair scheduling (Open)