Since repaired data is by definition consistent across replicas, and we know which sstables have been repaired, we can optimize the read path with a REPAIRED_QUORUM consistency level that splits reads into two phases:
1) Read the result from the repaired sstables from a single replica.
2) Read only the un-repaired data from a quorum of replicas.
For the node performing step 1) we can pipeline the call so it stays a single hop.
In the long run (assuming data is repaired regularly), performance should end up much closer to CL.ONE while quorum consistency is maintained.
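The two phases above can be sketched as follows. This is a minimal, hypothetical simulation of the idea, not Cassandra internals: the replica, cell, and read-function names are all illustrative, and reconciliation is modeled as simple last-write-wins on timestamps.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    value: str
    timestamp: int  # write timestamp, used for last-write-wins resolution

@dataclass
class Replica:
    # Repaired sstables hold data that is consistent across replicas
    # by definition; un-repaired sstables may diverge.
    repaired: dict = field(default_factory=dict)    # key -> Cell
    unrepaired: dict = field(default_factory=dict)  # key -> Cell

def lww(cells):
    """Last-write-wins: keep the cell with the highest timestamp."""
    cells = [c for c in cells if c is not None]
    return max(cells, key=lambda c: c.timestamp, default=None)

def repaired_quorum_read(key, replicas, quorum):
    # Phase 1: a single replica answers from its repaired sstables only.
    repaired_result = replicas[0].repaired.get(key)
    # Phase 2: a quorum of replicas answer from un-repaired data only.
    unrepaired_results = [r.unrepaired.get(key) for r in replicas[:quorum]]
    # Merge the two phases with the usual timestamp reconciliation.
    return lww([repaired_result, lww(unrepaired_results)])
```

For example, if all three replicas hold a repaired cell ("old", ts=1) and one quorum member holds a newer un-repaired cell ("new", ts=2), the merged read returns "new"; if the key exists only in repaired data, the single-replica repaired read alone is sufficient.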
Some things to figure out:
- If repairs fail on some nodes, we can end up with an inconsistent repaired state across the replicas.