- Type: Bug
- Status: Open
- Priority: Low
- Resolution: Unresolved
- Fix Version/s: None
- Component/s: Consistency/Repair
- Labels: None
- Severity: Low
- Since Version:
Since CASSANDRA-5220 there is an issue with system_distributed.repair_history when using virtual nodes. Performing a standard "nodetool repair" creates far fewer entries than it should.
Example:
$ ccm create test_repair -n 3 --vnodes -v 3.0.17
...
cqlsh> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
cqlsh> CREATE TABLE test.test(key int PRIMARY KEY);
...
$ ccm node1 nodetool repair test
...
cqlsh> SELECT keyspace_name, columnfamily_name, id, range_begin, range_end FROM system_distributed.repair_history;

 keyspace_name | columnfamily_name | id                                   | range_begin         | range_end
---------------+-------------------+--------------------------------------+---------------------+---------------------
          test |              test | 12f27830-1e53-11e9-93a0-2122ff85bd0a | 6842951316968308632 | 6844625844103123572
In the example above the cluster is created with 256 vnodes per node, yet the repair history contains only a single entry.
The problem is that since CASSANDRA-5220 a single repair session can repair multiple token ranges, but the insertion into the repair_history table is done with the same id for all of them. Because id is part of the table's primary key, each insert for a new range overwrites the previous row, leaving only one entry per session.
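A minimal sketch of why this loses data, assuming repair_history is keyed by ((keyspace_name, columnfamily_name), id): Cassandra inserts are upserts, so writes that share a primary key replace one another. Modeling the table as a dict keyed the same way (the table name, UUID, and token ranges below are illustrative, not taken from the actual repair):

```python
# Hypothetical model of Cassandra upsert semantics: the dict key plays
# the role of repair_history's primary key ((keyspace_name,
# columnfamily_name), id). All values are made up for illustration.
rows = {}
same_id = "12f27830-1e53-11e9-93a0-2122ff85bd0a"  # one id reused for every range
token_ranges = [(100, 200), (300, 400), (500, 600)]

for begin, end in token_ranges:
    # Each range insert uses the same primary key, so it overwrites
    # the previous row instead of adding a new one.
    rows[("test", "test", same_id)] = (begin, end)

print(len(rows))  # 1 -- three ranges repaired, one row recorded
```

Generating a distinct id per token range (or adding the range to the primary key) would keep the rows separate.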