Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Duplicate
Description
When I run a Spark job that contains a very large number of tasks (in my case 200k map tasks * 200k reduce tasks), the driver hits an OOM caused mainly by MapStatus objects; the RoaringBitmap used to mark which blocks are empty seems to use too much memory.
I tried using org.apache.spark.util.collection.BitSet instead of RoaringBitmap, and it saves about 20% of that memory.
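To make the idea concrete, here is a minimal sketch (not the actual patch) of tracking empty shuffle blocks with one bit per reduce partition. java.util.BitSet stands in for org.apache.spark.util.collection.BitSet, which is private[spark]; the class and method names below are illustrative only.

{code:scala}
import java.util.BitSet // stand-in for org.apache.spark.util.collection.BitSet (private[spark])

// Illustrative only: a simplified view of how a MapStatus could record
// which shuffle blocks are empty, using one bit per reduce partition.
class EmptyBlockTracker(numReduceTasks: Int) {
  private val emptyBlocks = new BitSet(numReduceTasks)

  def markEmpty(reduceId: Int): Unit = emptyBlocks.set(reduceId)
  def isBlockEmpty(reduceId: Int): Boolean = emptyBlocks.get(reduceId)
}

object EmptyBlockTrackerDemo {
  def main(args: Array[String]): Unit = {
    val tracker = new EmptyBlockTracker(200000) // 200k reduce tasks, as in this job
    tracker.markEmpty(12345)
    println(tracker.isBlockEmpty(12345)) // true
    println(tracker.isBlockEmpty(67890)) // false
  }
}
{code}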
For the 200K-task job:
- RoaringBitmap uses 3 Long[1024] arrays and 1 Short[3392] array = 3*64*1024 + 16*3392 = 250,880 bits
- BitSet uses 1 Long[3125] array = 3125*64 = 200,000 bits
- Memory saved = (250880 - 200000) / 250880 ≈ 20%
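The estimate above can be reproduced with a few lines of plain Scala; the container sizes are the ones observed for this job, and the variable names are mine.

{code:scala}
object BitmapMemoryEstimate {
  def main(args: Array[String]): Unit = {
    val numReduceTasks = 200000

    // RoaringBitmap as observed for this job: 3 Long[1024] containers + 1 Short[3392] container.
    val roaringBits = 3L * 1024 * 64 + 3392L * 16             // 250,880 bits

    // A dense BitSet needs ceil(200,000 / 64) = 3,125 longs.
    val bitsetBits = ((numReduceTasks + 63) / 64).toLong * 64 // 200,000 bits

    val saved = (roaringBits - bitsetBits).toDouble / roaringBits
    println(f"Roaring: $roaringBits bits, BitSet: $bitsetBits bits, saved: ${saved * 100}%.0f%%")
  }
}
{code}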
Issue Links
- is duplicated by: SPARK-11583 Make MapStatus use less memory usage (Resolved)
- links to