Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 0.23.3, 2.0.0-alpha
- Labels: None
Description
Recently an OutOfMemoryError caused one of our jobs to become a zombie (MAPREDUCE-4300). It was a rather large job with 78000+ map tasks and only 750MB of heap configured. I took a heap dump to see if there were any obvious memory leaks; I could not find any, but YourKit and some digging turned up some potential memory optimizations we could make.
In this particular case we could save about 20MB if SplitMetaInfoReader.readSplitMetaInfo computed the JobSplitFile path once instead of once for each split (a two-line change); see the sketch below.
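For illustration, a minimal sketch of that kind of change, hoisting the per-split computation out of the loop so all splits share one String instead of 78000+ copies. The names here (getJobSplitFile, SplitInfo, readBefore/readAfter) are simplified stand-ins, not the actual Hadoop internals or the patch itself.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class SplitMetaInfoSketch {

    // Stand-in for the path computation done per split; each call
    // builds a brand-new String.
    static String getJobSplitFile(String jobSubmitDir) {
        return jobSubmitDir + "/job.split";
    }

    // Simplified stand-in for per-split metadata.
    record SplitInfo(String splitFile, long offset) {}

    // Before: the file name is recomputed inside the loop, so a large
    // job holds one duplicate String per split.
    static List<SplitInfo> readBefore(String jobSubmitDir, long[] offsets) {
        List<SplitInfo> infos = new ArrayList<>(offsets.length);
        for (long offset : offsets) {
            infos.add(new SplitInfo(getJobSplitFile(jobSubmitDir), offset));
        }
        return infos;
    }

    // After: compute the file name once and let every split reference
    // the same String instance.
    static List<SplitInfo> readAfter(String jobSubmitDir, long[] offsets) {
        String jobSplitFile = getJobSplitFile(jobSubmitDir); // hoisted
        List<SplitInfo> infos = new ArrayList<>(offsets.length);
        for (long offset : offsets) {
            infos.add(new SplitInfo(jobSplitFile, offset));
        }
        return infos;
    }
}
{code}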
I will look into some others and see if there are more savings I can come up with.