Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 1.22.3, 1.8.22, 1.30.0
Description
It is currently possible to write a MapRecord whose number of entries exceeds the maximum limit, MapRecord.MAX_SIZE (i.e. 536,870,911 entries). The issue stems from the fact that the number of entries is checked when writing a map leaf record [0], but not when writing a map branch record [1]. When more than MapRecord.MAX_SIZE entries are written into a branch record [2], the entryCount overflows into the first bit of the level field, effectively corrupting the entire HAMT structure: the root branch record is now stored at level 1 instead of level 0, and it reports an incorrect size as well (i.e. actual size - MapRecord.MAX_SIZE).
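The sketch below (not the actual Oak code) illustrates the failure mode: when the entry count and the level share a single packed header word and the count is not bounds-checked, a count one past MAX_SIZE spills into the level bits. The exact bit layout shown here is an assumption for illustration; only the MAX_SIZE value of (1 << 29) - 1 is taken from MapRecord.

```java
// Minimal, self-contained sketch of a size/level header word overflowing.
public class MapHeaderOverflowSketch {

    static final int SIZE_BITS = 29;
    static final int MAX_SIZE = (1 << SIZE_BITS) - 1; // 536,870,911

    // Pack level and entry count into one int, with no bounds check on the count.
    static int header(int level, int entryCount) {
        return (level << SIZE_BITS) | entryCount;
    }

    public static void main(String[] args) {
        int entryCount = MAX_SIZE + 1;           // one entry over the hard limit
        int head = header(0, entryCount);        // intended to be a level-0 root record

        int decodedLevel = head >>> SIZE_BITS;   // decodes as level 1 instead of 0
        int decodedSize = head & MAX_SIZE;       // decodes as entryCount modulo 2^29

        System.out.println("level=" + decodedLevel + ", size=" + decodedSize);
        // Prints: level=1, size=0 -- the record now looks like a level-1 branch
        // with a wildly wrong size, which is exactly the corruption described above.
    }
}
```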
Since this is a hard limit of the segment store and going above it would mean rewriting the internals of the HAMT structure currently in use, I propose the following mitigation (sketched after the list):
- add a size check for the branch record so the limit cannot be exceeded
- log a warning when the number of entries goes over 400,000,000
- log an error when the number of entries goes over 500,000,000 and do not allow any further write operations on the node
- allow further writes only if the oak.segmentNodeStore.allowWritesOnHugeMapRecord system property is set
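A minimal sketch of the proposed guard follows, assuming it would run wherever a branch record's entry count is computed before writing. The thresholds and the system property name come from the proposal above; the class and method names are hypothetical and not part of the actual fix.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class HugeMapRecordGuard {

    private static final Logger LOG = LoggerFactory.getLogger(HugeMapRecordGuard.class);

    private static final int MAX_SIZE = (1 << 29) - 1;      // MapRecord.MAX_SIZE
    private static final int WARN_THRESHOLD = 400_000_000;
    private static final int ERROR_THRESHOLD = 500_000_000;
    private static final String ALLOW_WRITES_PROPERTY =
            "oak.segmentNodeStore.allowWritesOnHugeMapRecord";

    static void checkEntryCount(int entryCount) {
        if (entryCount > MAX_SIZE) {
            // Hard limit: refuse to write a record that would corrupt the HAMT header.
            throw new IllegalArgumentException(
                    "Map record has " + entryCount + " entries, exceeding MAX_SIZE " + MAX_SIZE);
        }
        if (entryCount > ERROR_THRESHOLD) {
            if (System.getProperty(ALLOW_WRITES_PROPERTY) == null) {
                LOG.error("Map record has {} entries; blocking further writes", entryCount);
                throw new UnsupportedOperationException(
                        "Set -D" + ALLOW_WRITES_PROPERTY + " to allow writes on huge map records");
            }
            LOG.error("Map record has {} entries; writes allowed by {}",
                    entryCount, ALLOW_WRITES_PROPERTY);
        } else if (entryCount > WARN_THRESHOLD) {
            LOG.warn("Map record has {} entries, approaching MAX_SIZE {}", entryCount, MAX_SIZE);
        }
    }
}
```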
[0] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/DefaultSegmentWriter.java#L284
[1] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/DefaultSegmentWriter.java#L291
[2] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/RecordWriters.java#L231