Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Labels: None
Description
In environments with a lot of volatile content, the CompactionMap can end up consuming a large amount of memory. From CompactionStrategyMBean#getCompactionMapStats:
[Estimated Weight: 317,5 MB, Records: 39500094, Segments: 36698],
[Estimated Weight: 316,4 MB, Records: 39374593, Segments: 36660],
[Estimated Weight: 315,4 MB, Records: 39253205, Segments: 36620],
[Estimated Weight: 315,1 MB, Records: 39221882, Segments: 36614],
[Estimated Weight: 314,9 MB, Records: 39195490, Segments: 36604],
[Estimated Weight: 315,0 MB, Records: 39182753, Segments: 36602],
[Estimated Weight: 360 B, Records: 0, Segments: 0]
This causes compaction to be skipped:
30.03.2015 02:00:00.038 *INFO* [] [TarMK compaction thread [/foo/bar/crx-quickstart/repository/segmentstore], active since Mon Mar 30 02:00:00 CEST 2015, previous max duration 3854982ms] org.apache.jackrabbit.oak.plugins.segment.file.FileStore Not enough available memory 5,5 GB, needed 6,3 GB, last merge delta 1,3 GB, so skipping compaction for now
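For reference, statistics like the ones quoted above can be read programmatically over JMX. The following is a minimal sketch, not part of the issue itself; it assumes the CompactionStrategy MBean is registered under the org.apache.jackrabbit.oak domain with type=CompactionStrategy and that getCompactionMapStats is exposed as the CompactionMapStats attribute. The exact object name should be verified against the running instance, e.g. with jconsole.
{code:java}
import java.lang.management.ManagementFactory;
import java.util.Set;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class CompactionMapStatsReader {
    public static void main(String[] args) throws Exception {
        // MBean server of the current JVM; for a remote instance a
        // JMXConnector to the service URL would be used instead.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Assumed object name pattern; verify domain and type against the
        // names actually registered by the Oak version in use.
        Set<ObjectName> names = server.queryNames(
                new ObjectName("org.apache.jackrabbit.oak:type=CompactionStrategy,*"), null);

        for (ObjectName name : names) {
            // getCompactionMapStats() surfaces as the CompactionMapStats attribute
            Object stats = server.getAttribute(name, "CompactionMapStats");
            System.out.println(name + " -> " + stats);
        }
    }
}
{code}
Monitoring this value over time makes it easy to spot the growth pattern shown above before compaction starts being skipped.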
Issue Links
- is blocked by
  - OAK-2662 SegmentOverflowException in HeavyWriteIT on Jenkins (Closed)
- is related to
  - OAK-2723 FileStore does not scale because of precomputed graph on TarReader (Closed)
  - OAK-2862 CompactionMap#compress() inefficient for large compaction maps (Closed)
  - OAK-2967 Merge OAK-2800, OAK-2801, OAK-2692, OAK-2713 (Closed)
- relates to
  - OAK-2967 Merge OAK-2800, OAK-2801, OAK-2692, OAK-2713 (Closed)