Details
- Type: Bug
- Status: Open
- Priority: Urgent
- Resolution: Unresolved
- Fix Version/s: None
- Environment: Linux, 4 CPU cores, 16 GB RAM; the Cassandra process utilizes ~8 GB, of which ~4 GB is Java heap
- Severity: Critical
Description
2.8 GB of the heap is taken by index data pending flush (see the attached screenshot). As a result, the node fails with an OutOfMemoryError.
Questions:
- Why can't Cassandra keep up with the inserted data and flush it?
- What resources or configuration should be changed to improve performance?
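For context on the second question: the cassandra.yaml knobs that typically bound how much memtable/index data may accumulate on-heap before a flush are sketched below. The values are illustrative assumptions for a 4-core, 16 GB box, not recommendations from this report:

```yaml
# cassandra.yaml flush-related settings (illustrative values, tune per workload)
memtable_heap_space_in_mb: 2048      # cap on on-heap memtable space before flush pressure kicks in
memtable_offheap_space_in_mb: 2048   # off-heap counterpart; moving data off-heap relieves the Java heap
memtable_cleanup_threshold: 0.20     # flush the largest memtable earlier as total memtable space fills
memtable_flush_writers: 4            # parallel flush writers; roughly one per core on this 4-core host
```

Lowering the heap-space cap and cleanup threshold makes flushes happen sooner and smaller, which can prevent pending index data from growing until the heap is exhausted, at the cost of more frequent flush I/O.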
Attachments
Issue Links
- is related to CASSANDRA-16071: max_compaction_flush_memory_in_mb is interpreted as bytes (Resolved)