Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Hadoop Flags: Reviewed
Description
We have seen instances where a user submitted a job with many thousands of mappers. The JobTracker was running with a 3 GB heap, but that was still not enough to prevent memory thrashing from garbage collection; effectively, the JobTracker was unable to serve jobs and had to be restarted.
One simple proposal would be to limit the maximum number of tasks per job. This could be a configurable parameter. Are there other things that eat huge globs of memory in the JobTracker?
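As a rough illustration of the proposal, a check like the one below could run during job initialization, before the JobTracker materializes any task objects, so an oversized job fails fast instead of exhausting the heap. The property name mapred.jobtracker.maxtasks.per.job, the TaskLimitCheck class, and the checkTaskLimits helper are assumptions for this sketch, not anything specified in this issue.
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

/**
 * Illustrative sketch of the proposed guard: reject a job whose total task
 * count exceeds a configurable cap before the JobTracker creates the task
 * objects. The property name and this helper are assumptions.
 */
public class TaskLimitCheck {

  public static final String MAX_TASKS_PER_JOB_KEY =
      "mapred.jobtracker.maxtasks.per.job";  // assumed property name
  public static final int UNLIMITED = -1;    // default: no cap

  /**
   * Fails the job during initialization if map + reduce task count exceeds
   * the configured limit, instead of letting it thrash the JobTracker heap.
   */
  public static void checkTaskLimits(Configuration conf,
                                     int numMapTasks,
                                     int numReduceTasks) throws IOException {
    int maxTasks = conf.getInt(MAX_TASKS_PER_JOB_KEY, UNLIMITED);
    int requested = numMapTasks + numReduceTasks;
    if (maxTasks != UNLIMITED && requested > maxTasks) {
      throw new IOException("The number of tasks for this job (" + requested
          + ") exceeds the configured limit (" + maxTasks + ")");
    }
  }
}
{code}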
Attachments
Issue Links
- duplicates
  - HADOOP-3925 Configuration paramater to set the maximum number of mappers/reducers for a job (Closed)
- is blocked by
  - HADOOP-4261 Jobs failing in the init stage will never cleanup (Closed)