Details
- Type: Bug
- Status: Resolved
- Priority: Normal
- Resolution: Duplicate
- Fix Version/s: None
- Component/s: None
- Environment: Cassandra v2.1.9 running on a 6-node Amazon AWS cluster, vnodes enabled
- Severity: Normal
Description
After running an incremental repair, nodetool status reports unbalanced load across the cluster.
$ nodetool status mykeyspace
==========================
Status | Address | Load | Tokens | Owns (effective) | Host ID | Rack |
---|---|---|---|---|---|---|
UN | 10.1.1.1 | 1.13 TB | 256 | 48.5% | a4477534-a5c6-4e3e-9108-17a69aebcfc0 | RAC1 |
UN | 10.1.1.2 | 2.58 TB | 256 | 50.5% | 1a7c3864-879f-48c5-8dde-bc00cf4b23e6 | RAC2 |
UN | 10.1.1.3 | 1.49 TB | 256 | 51.5% | 27df5b30-a5fc-44a5-9a2c-1cd65e1ba3f7 | RAC1 |
UN | 10.1.1.4 | 250.97 GB | 256 | 51.9% | 9898a278-2fe6-4da2-b6dc-392e5fda51e6 | RAC3 |
UN | 10.1.1.5 | 1.88 TB | 256 | 49.5% | 04aa9ce1-c1c3-4886-8d72-270b024b49b9 | RAC2 |
UN | 10.1.1.6 | 1.3 TB | 256 | 48.1% | 6d5d48e6-d188-4f88-808d-dcdbb39fdca5 | RAC3 |
It seems that only 10.1.1.4 reports the correct "Load". There are no hints in the cluster, and the report stays the same after running "nodetool cleanup" on each node. "nodetool cfstats" shows that the number of keys is evenly distributed, and the physical disk usage of the Cassandra data directory is about the same on every node.
"nodetool status" keeps reporting these inaccurately large storage loads until we restart each node; after the restart, the reported "Load" matches what we see on disk.
We did not see this behavior until upgrading to v2.1.9.
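For reference, this is roughly how we cross-checked the reported "Load" against what is actually on disk (a sketch; the data directory path below is the default /var/lib/cassandra/data and may differ per install):
$ nodetool status mykeyspace | grep '^UN'               # reported Load per node
$ du -sh /var/lib/cassandra/data                        # actual on-disk size of the data directory
$ nodetool cfstats mykeyspace | grep 'Number of keys'   # per-table key counts, to check distribution
On the affected nodes the du figure is far below the reported Load, while on 10.1.1.4 the two roughly agree. Restarting a node (e.g. "nodetool drain" followed by restarting the Cassandra service) brings its reported Load back in line with the on-disk size.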
Issue Links
- duplicates CASSANDRA-10831: Fix the way we replace sstables after anticompaction (Resolved)