Details
- Type: Improvement
- Status: Open
- Priority: Low
- Resolution: Unresolved
- Fix Version/s: None
Description
I've been working on a way to keep data consistent without scheduled/external/manual repair, because for large datasets repair is extremely expensive. The basic idea is to introduce a new kind of hint that stores just the primary key of the mutation (indicating that the PK needs repair) and is recorded on replicas, rather than coordinators, at write time. A periodic background task can then issue read repairs for just the PKs that were mutated. The initial performance degradation of this approach is non-trivial, but I believe it can be optimized so that we do very little additional work (see the proposed optimizations in the design doc below).
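To make the shape of this concrete, here is a minimal sketch of the two halves described above: a replica recording a key-only hint on the write path, and a background task draining those keys through reads that trigger read repair. All class and method names here are hypothetical illustrations, not the API from the linked branch:

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of replica-side "key hints": instead of the coordinator storing
 * whole mutations (hinted handoff), each replica records only the primary
 * key of every mutation it applies. A background task later reads those
 * keys at CL.ALL, which exercises the normal read-repair path and brings
 * all replicas in sync for exactly the data that was mutated.
 */
public class KeyHintRepairSketch
{
    // In the real design this would live in hint storage (or a local
    // table, as in the proof of concept), not on the heap.
    private final Set<String> dirtyPrimaryKeys = ConcurrentHashMap.newKeySet();

    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    /** Write path: called by a replica as it applies a mutation. */
    public void recordKeyHint(String primaryKey)
    {
        // Tiny and O(1) compared to storing the whole mutation.
        dirtyPrimaryKeys.add(primaryKey);
    }

    /** Background path: periodically repair only the keys that changed. */
    public void start(long periodSeconds)
    {
        scheduler.scheduleAtFixedRate(this::drain, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    private void drain()
    {
        for (String pk : dirtyPrimaryKeys)
        {
            // A digest read at ConsistencyLevel.ALL compares every replica
            // and issues read repair on mismatch; afterwards the key hint
            // can be cleared. A real implementation would have to handle
            // the race where the key is re-mutated between the read and
            // the remove.
            readAtAllAndRepair(pk);
            dirtyPrimaryKeys.remove(pk);
        }
    }

    private void readAtAllAndRepair(String primaryKey)
    {
        // Placeholder for an internal CL.ALL read of this key.
    }
}
{code}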
My extremely rough proof of concept (which uses a local table instead of HintStorage, etc.) is in a branch, along with a rough design document. I'm working on benchmarking the various optimizations, but I figured I should open this ticket before getting too deep into it.
I believe this approach is particularly well suited to high-read-rate clusters that require consistently low latency, and to clusters that mutate a relatively small proportion of their data (since you never have to read the whole dataset, just what's being mutated). I view it as something that works alongside incremental repair to reduce the work required, because with this technique we could potentially flush repaired and unrepaired sstables directly from the memtable. I also see it as something that would be enabled or disabled per table, since it is so use-case specific (e.g. some tables don't need repair at all). It's somewhat of a hybrid approach, drawing on incremental repair, ticklers (reading all partitions @ ALL), mutation-based repair (CASSANDRA-8911), and hinted handoff. There are lots of tradeoffs, but I think it's worth discussing.
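For the per-table aspect, a hedged sketch of what the opt-in plumbing might look like; none of these names exist in Cassandra today, and the CQL syntax in the comment is purely imagined:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical per-table opt-in. Key-hint tracking adds work to every
 * write, so it only makes sense on tables whose repair cost would
 * otherwise dominate; tables that don't need repair at all would simply
 * leave it off. Imagined user-facing syntax might be:
 *
 *   ALTER TABLE ks.tbl WITH key_hint_repair = true;
 */
public class KeyHintRepairOptions
{
    private static final Map<String, Boolean> ENABLED_TABLES = new ConcurrentHashMap<>();

    /** Called when a table's schema is created or altered. */
    public static void setEnabled(String keyspaceAndTable, boolean enabled)
    {
        ENABLED_TABLES.put(keyspaceAndTable, enabled);
    }

    /** Write-path guard: record a key hint only for opted-in tables. */
    public static boolean isEnabled(String keyspaceAndTable)
    {
        return ENABLED_TABLES.getOrDefault(keyspaceAndTable, false);
    }
}
{code}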
If anyone has feedback on the idea, I'd love to chat about it. bdeggleston, aweisberg: I chatted with you both a bit about this at NGCC; if you have time, I'd love to continue that conversation here.