Details

Type: Bug
Status: Resolved
Priority: Normal
Resolution: Fixed
Description
I'm watching a repair on a 7-node cluster. Repair was sent to one node; that node had 18G of data, and no other node has more than 28G. The node where the repair initiated is now up to 261G (roughly 14x its original load) with 53/60 AES tasks outstanding.
I have seen repair take more space than expected on 0.6, but nothing this extreme.
Other nodes in the cluster are occasionally logging:
WARN [ScheduledTasks:1] 2010-10-28 08:31:14,305 MessagingService.java (line 515) Dropped 7 messages in the last 1000ms
The cluster is quiesced except for the repair. Not sure if the dropped messages are contributing to the disk space usage (b/c of retries?).
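For reference, here is roughly how I'm tracking this while the repair runs; the data directory and log paths below are assumptions based on a default install, so adjust them to match cassandra.yaml and your log4j settings:

watch -n 60 'du -sh /var/lib/cassandra/data'                   # sample the repairing node's on-disk size once a minute (data path is an assumption; see data_file_directories)
grep -c 'Dropped .* messages' /var/log/cassandra/system.log    # count dropped-message warnings accumulating on the other nodes (log path is an assumption)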