Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Labels: None
Description
The SegmentStore cache size calculation ignores the size of the Segment.string field (a concurrent hash map): a regular segment in a memory-mapped file is counted with a size of 1024, no matter how many strings it has loaded into memory. This can lead to out-of-memory errors, and there is currently no way to limit or configure the amount of memory used by these strings. In one example, 100'000 segments are loaded in memory and 5 GB are used for the strings in that map.
We need a way to configure the amount of memory used for this. It is essentially a cache; OAK-2688 addresses part of the problem, but it would be better to have a single cache with a configurable size limit. A minimal sketch of what such a cache could look like is given below.
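A minimal sketch of a weight-limited string cache, assuming Guava's CacheBuilder (already a dependency of Oak) with an approximate per-entry weight. The class name StringCache, the record-id key type and the overhead constant are illustrative assumptions, not the actual Oak implementation:

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

/**
 * Illustrative size-bounded cache for strings read from segments.
 * All names here are hypothetical.
 */
public class StringCache {

    private final Cache<String, String> cache;

    public StringCache(long maxWeightBytes) {
        cache = CacheBuilder.newBuilder()
                .maximumWeight(maxWeightBytes)
                .weigher(new Weigher<String, String>() {
                    @Override
                    public int weigh(String recordId, String value) {
                        // Rough heap estimate: 2 bytes per char plus a fixed
                        // per-entry overhead for object headers and the map entry.
                        return 2 * value.length() + 56;
                    }
                })
                .build();
    }

    /** Returns the cached string, loading it from the segment on a cache miss. */
    public String get(String recordId, Callable<String> loader)
            throws ExecutionException {
        return cache.get(recordId, loader);
    }
}
{code}

With a weigher like this, the total memory retained for strings stays close to the configured limit regardless of how many segments are mapped, which is the configurability asked for above.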
Issue Links
- is duplicated by
  - OAK-2688 Segment.readString optimization (Closed)
- is related to
  - OAK-3075 Compaction Estimation should type check binary properties (Resolved)
  - OAK-3889 SegmentMk StringCache memory leak (Closed)
  - OAK-3089 LIRS cache: zero size cache causes IllegalArgumentException (Closed)
  - OAK-3055 Improve segment cache in SegmentTracker (Closed)
- relates to
  - OAK-3109 OOME in tarkmk standby tests (Closed)