Details
- Type: Bug
- Status: Resolved
- Priority: Normal
- Resolution: Duplicate
Description
I see this:
- Validation compaction runs on nodes 2,3,4 for CF_A only (expected)
- Node 3 streams SSTables from CF_A only to nodes 2 and 4 (expected)
- Nodes 2 and 4 stream SSTables from ALL column families in the keyspace to node 2 (VERY unexpected)
The above is a quote from another report; the only difference is that that description seems to involve an RF=2 cluster, whereas ours is RF=3.
It seems that AES.performStreamingRepair just sends a StreamRequestMessage without any column family information, so the peer nodes simply send all the data they have for that range from every CF in the keyspace. But I must be missing something, since that doesn't make any sense at all.
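To illustrate the suspected mechanism, here is a minimal, hypothetical model (plain Java, not Cassandra's actual classes or signatures) of a stream request that carries only a keyspace and token range: with no CF filter in the request, the peer's SSTable selection matches every column family in the keyspace.

```java
import java.util.*;

public class StreamRequestSketch {

    // Hypothetical request: keyspace + range, with an optional CF filter.
    // The suspicion above is that the real StreamRequestMessage behaves
    // like the empty-filter case.
    record StreamRequest(String keyspace, int rangeStart, int rangeEnd,
                         Optional<String> columnFamily) {}

    // Peer-side selection of SSTables to stream for the request.
    static List<String> sstablesToStream(Map<String, List<String>> cfToSSTables,
                                         StreamRequest req) {
        List<String> out = new ArrayList<>();
        for (var e : cfToSSTables.entrySet()) {
            // Without a CF filter, every CF in the keyspace matches,
            // so data from all CFs is streamed for the range.
            if (req.columnFamily().isEmpty()
                    || req.columnFamily().get().equals(e.getKey())) {
                out.addAll(e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> peer = Map.of(
                "CF_A", List.of("CF_A-1-Data.db"),
                "CF_B", List.of("CF_B-1-Data.db"));

        // Repair intended for CF_A only, but the request has no CF filter:
        var noFilter = new StreamRequest("ks", 0, 100, Optional.empty());
        System.out.println(sstablesToStream(peer, noFilter).size()); // both CFs

        // With a CF filter, only CF_A's SSTables would be selected:
        var filtered = new StreamRequest("ks", 0, 100, Optional.of("CF_A"));
        System.out.println(sstablesToStream(peer, filtered));
    }
}
```

This matches the observation above: the requesting node ends up receiving SSTables from every CF in the keyspace, not just the one being repaired.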
The fact is that after minor compactions, the node on which the repair was triggered basically contained everything twice.
The good news is that while our 0.6 cluster would never have survived this, it barely affected read latencies. That whole page-cache optimization really seems to work. Very, very cool!