Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Affects Version/s: 1.6.3, 2.0.2, 2.1.3, 2.2.3, 2.3.4, 2.4.4, 3.0.0
Labels: None
Description
One Spark job on our cluster hung forever in BytesToBytesMap.safeLookup. On inspection, the internal long array's size was 536870912.
Currently, BytesToBytesMap.append grows the internal array only if the array's size is less than MAX_CAPACITY, which is 536870912. So in the case above, the array can never grow, and safeLookup loops forever without finding an empty slot.
But this check is wrong: we use two array entries per key (the record address and the key's full hash code), so the array's size is twice the map's capacity. We should compare the map's current capacity against MAX_CAPACITY, not the array's size.
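To make the arithmetic concrete, here is a minimal, self-contained sketch of the two checks (the class and method names are hypothetical, not the actual Spark source). With the observed array size of 536870912, the map's capacity is only 268435456, half of MAX_CAPACITY, yet the old check already refuses to grow:
{code:java}
public class GrowthCheckSketch {
    // Same constant as BytesToBytesMap.MAX_CAPACITY: 1 << 29 = 536870912.
    static final int MAX_CAPACITY = 1 << 29;

    // Old check: compares the long array's size (which is 2 * capacity)
    // against MAX_CAPACITY, so growth stops once capacity reaches
    // MAX_CAPACITY / 2.
    static boolean canGrowOld(long longArraySize) {
        return longArraySize < MAX_CAPACITY;
    }

    // Proposed check: compares the capacity (size / 2, since each key
    // occupies two array entries) against MAX_CAPACITY.
    static boolean canGrowProposed(long longArraySize) {
        return longArraySize / 2 < MAX_CAPACITY;
    }

    public static void main(String[] args) {
        long longArraySize = 536870912L; // the size observed in this issue

        // Old check: false, so the map never grows and safeLookup hangs.
        System.out.println("old check allows growth: " + canGrowOld(longArraySize));
        // Proposed check: true, since capacity (2^28) is below MAX_CAPACITY.
        System.out.println("proposed check allows growth: " + canGrowProposed(longArraySize));
    }
}
{code}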
Attachments
Issue Links