Details
- Type: Improvement
- Status: Resolved
- Priority: Normal
- Resolution: Fixed
Description
I am evaluating Cassandra for a use case with many tiny rows, which would mean a node with 1-3TB of storage holds billions of rows. Before loading that much data I am hitting GC issues, and when looking at the heap dump I noticed that 70+% of the memory was used by IndexSummaries.
The two major issues seem to be:
1) The positions are stored as an ArrayList<Long>, which results in each position taking 24 bytes (object header and flags plus the 8-byte long value). This might make sense while the file is initially being written, but once it has been serialized it would be much more memory-efficient to use a plain long[] (really an int[] would be fine unless 2GB sstables are allowed).
2) The DecoratedKey for a byte[16] key takes 195 bytes, due to the overhead of the ByteBuffer wrapping the key and the overhead in the token.
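A back-of-the-envelope sketch of issue 1. This is not Cassandra code; the class and its constants are illustrative, assuming a 64-bit HotSpot JVM where a boxed Long costs about 24 bytes (as reported above) plus a 4-byte compressed reference slot in the ArrayList's backing array, while a long[] costs a fixed array header plus 8 bytes per slot:

```java
// Rough per-entry memory estimate: ArrayList<Long> vs. long[].
// Assumptions (hypothetical, 64-bit JVM with compressed oops):
//   boxed Long  ~ 24 bytes, plus a 4-byte reference in the backing Object[]
//   long[]      ~ 16-byte array header, plus 8 bytes per element
public class IndexSummaryFootprint {
    public static long boxedBytes(int entries) {
        return entries * (24L + 4L);
    }

    public static long primitiveBytes(int entries) {
        return 16L + entries * 8L;
    }

    public static void main(String[] args) {
        int entries = 1_000_000;
        System.out.printf("ArrayList<Long>: ~%d bytes%n", boxedBytes(entries));
        System.out.printf("long[]:          ~%d bytes%n", primitiveBytes(entries));
    }
}
```

Under these assumptions the boxed form costs roughly 3.5x the primitive form per entry, before counting any GC pressure from the million small Long objects themselves.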
To somewhat work around the problem I have increased index_sample, but with this many rows that didn't really help and starts to hit diminishing returns.
NOTE: This heap dump was taken on Linux with a 64-bit Oracle JVM.
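One direction a fix could take, sketched as code. This is a hypothetical layout, not Cassandra's actual classes: instead of one DecoratedKey/ByteBuffer object per sampled key, all key bytes are packed into a single byte[] with an int[] offset table, so the per-entry overhead collapses to a few bytes of offset:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical flat index-summary layout: keys concatenated into one
// byte[], located via an offset table, with positions in a plain long[].
public class FlatKeySummary {
    private final byte[] keyBytes;   // all sampled keys, concatenated
    private final int[] offsets;     // offsets[i] = start of key i; offsets[n] = keyBytes.length
    private final long[] positions;  // data-file position of each sampled key

    public FlatKeySummary(byte[][] keys, long[] positions) {
        this.positions = positions;
        this.offsets = new int[keys.length + 1];
        int total = 0;
        for (int i = 0; i < keys.length; i++) {
            offsets[i] = total;
            total += keys[i].length;
        }
        offsets[keys.length] = total;
        this.keyBytes = new byte[total];
        for (int i = 0; i < keys.length; i++) {
            System.arraycopy(keys[i], 0, keyBytes, offsets[i], keys[i].length);
        }
    }

    public byte[] keyAt(int i) {
        return Arrays.copyOfRange(keyBytes, offsets[i], offsets[i + 1]);
    }

    public long positionAt(int i) {
        return positions[i];
    }

    public static void main(String[] args) {
        byte[][] keys = {
            "apple".getBytes(StandardCharsets.UTF_8),
            "banana".getBytes(StandardCharsets.UTF_8)
        };
        FlatKeySummary s = new FlatKeySummary(keys, new long[] { 0L, 4096L });
        System.out.println(new String(s.keyAt(1), StandardCharsets.UTF_8) + " @ " + s.positionAt(1));
    }
}
```

The whole summary then consists of three flat arrays regardless of row count, which is exactly the kind of structure the GC never has to traverse object-by-object.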
Issue Links
- is related to:
  - CASSANDRA-5020 Time to switch back to byte[] internally? (Resolved)
  - CASSANDRA-4324 Implement Lucene FST in for key index (Resolved)