Details
Type: Sub-task
Status: Closed
Priority: Major
Resolution: Fixed
Fix Version: 2.0.0
Hadoop Flags: Reviewed
Description
It is a bit ugly now. For example:
AbstractMemStore:

  public final static long FIXED_OVERHEAD = ClassSize.align(
      ClassSize.OBJECT + (4 * ClassSize.REFERENCE) + (2 * Bytes.SIZEOF_LONG));

  public final static long DEEP_OVERHEAD = ClassSize.align(FIXED_OVERHEAD
      + (ClassSize.ATOMIC_LONG + ClassSize.TIMERANGE_TRACKER
      + ClassSize.CELL_SKIPLIST_SET + ClassSize.CONCURRENT_SKIPLISTMAP));
We include the heap overhead of Segment here as well. It would be better if the Segment contained its own overhead and the memstore implementation used the heap sizes of all of its segments to calculate its own size.
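
A minimal sketch of that shape, assuming a Segment that implements HeapSize and owns a deep-overhead constant. The class name, the set of fields counted, the overhead formula and the incSize() helper below are illustrative assumptions, not the actual patch:

  import java.util.concurrent.atomic.AtomicLong;
  import org.apache.hadoop.hbase.io.HeapSize;
  import org.apache.hadoop.hbase.util.ClassSize;

  // Hypothetical sketch of a Segment that owns its heap accounting; the exact
  // fields counted (and so the overhead formula) are assumptions.
  abstract class SegmentSketch implements HeapSize {

    // Overhead of the segment object itself: object header, a couple of
    // references, and the backing data structures it holds.
    static final long DEEP_OVERHEAD = ClassSize.align(
        ClassSize.OBJECT
            + (2 * ClassSize.REFERENCE)
            + ClassSize.ATOMIC_LONG
            + ClassSize.TIMERANGE_TRACKER
            + ClassSize.CELL_SKIPLIST_SET
            + ClassSize.CONCURRENT_SKIPLISTMAP);

    // Bytes of cell data accumulated in this segment.
    private final AtomicLong dataSize = new AtomicLong(0);

    void incSize(long delta) {
      dataSize.addAndGet(delta);
    }

    @Override
    public long heapSize() {
      // The segment reports its own overhead plus its data; the memstore
      // implementation then just sums heapSize() over all of its segments.
      return DEEP_OVERHEAD + dataSize.get();
    }
  }

With that, a memstore implementation only needs its own fixed overhead plus the sum of heapSize() over its segments.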
Also this:

  public long heapSize() {
    return getActive().getSize();
  }
heapSize() should consider all segments' sizes, not just the active segment's. I am not able to see an overriding method in CompactingMemstore.
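
A hedged sketch of what such an override could look like, walking the active segment, the compaction pipeline, and the snapshot. The class name and the accessors (fixedOverhead(), getPipelineSegments(), getSnapshot()) are assumed names for illustration, not the real CompactingMemstore API:

  import java.util.List;
  import org.apache.hadoop.hbase.io.HeapSize;

  // Hypothetical sketch only: the shape of a heapSize() that walks every
  // segment (active + compaction pipeline + snapshot) instead of only the
  // active one.  The accessors are assumed names, not the real API.
  abstract class AllSegmentsHeapSizeSketch implements HeapSize {

    // Fixed overhead of the memstore object itself, excluding its segments.
    abstract long fixedOverhead();

    // Assumed accessors for the segments this memstore currently holds.
    abstract HeapSize getActive();
    abstract List<? extends HeapSize> getPipelineSegments();
    abstract HeapSize getSnapshot();

    @Override
    public long heapSize() {
      long size = fixedOverhead() + getActive().heapSize();
      for (HeapSize segment : getPipelineSegments()) {
        size += segment.heapSize();
      }
      return size + getSnapshot().heapSize();
    }
  }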
This jira tries to solve some of these issues.
When we create a Segment, we seem to pass some initial heap size value to it. Why? The Segment object should internally know its own heap size; it should not have someone else dictate it.
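
A before/after sketch of the constructor change being suggested. The class name and the OWN_OVERHEAD formula are stand-ins; the point is only who decides the initial size:

  import java.util.concurrent.atomic.AtomicLong;
  import org.apache.hadoop.hbase.util.ClassSize;

  // Hypothetical before/after of the Segment constructor.  The overhead
  // formula is a stand-in; the point is only who decides the initial size.
  class SegmentConstructorSketch {

    // The segment's own notion of its fixed cost (assumed formula).
    private static final long OWN_OVERHEAD = ClassSize.align(
        ClassSize.OBJECT + (2 * ClassSize.REFERENCE) + ClassSize.ATOMIC_LONG);

    private final AtomicLong size;

    // Before (assumed shape): the creator dictates the starting heap size.
    //   SegmentConstructorSketch(long initialHeapSize) {
    //     this.size = new AtomicLong(initialHeapSize);
    //   }

    // After: the segment seeds its counter from its own overhead constant.
    SegmentConstructorSketch() {
      this.size = new AtomicLong(OWN_OVERHEAD);
    }
  }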
More to add when doing this cleanup.