Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.4.0
- Labels: None
- Hadoop Flags: Reviewed
Description
Currently, the counter limits ("mapreduce.job.counters.*"), handled by org.apache.hadoop.mapreduce.counters.Limits, are initialized asymmetrically: on the client side and in the MRAppMaster, job.xml is ignored, whereas it is taken into account in YarnChild.
It would be good to make the Limits job-configurable, so that the maximum number of counters/groups is increased only when needed. Because the current Limits implementation relies on static constants, this is hard for tools that submit jobs concurrently to work around without resorting to class-loader isolation.
The patch that I am uploading is not perfect but demonstrates the issue.
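For illustration, here is a minimal job-submission sketch of my own (assuming a Hadoop 2.x client; mapreduce.job.counters.max and mapreduce.job.counters.groups.max are the per-job limit keys that Limits reads from the configuration). It shows where the asymmetry bites: the raised limits land in job.xml and are honored by YarnChild, but not by the JobClient or the MRAppMaster.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CounterLimitDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Raise the per-job counter limits; these values end up in job.xml.
    conf.setInt("mapreduce.job.counters.max", 500);
    conf.setInt("mapreduce.job.counters.groups.max", 100);

    Job job = Job.getInstance(conf, "counter-limit-demo");
    // ... configure mapper/reducer/input/output as usual ...

    // YarnChild initializes Limits from the task's job.xml, so tasks accept
    // up to 500 counters. The client and the AM, however, keep the static
    // defaults, so retrieving counters for the same job can still fail on
    // their side with a LimitExceededException.
    job.waitForCompletion(true);
  }
}
{code}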
Attachments
Issue Links
- breaks
  - MAPREDUCE-6288 mapred job -status fails with AccessControlException (Resolved)
- is cloned by
  - MAPREDUCE-6925 CLONE - Make Counter limits consistent across JobClient, MRAppMaster, and YarnChild (Open)
- is duplicated by
  - MAPREDUCE-5856 Counter limits always use defaults even if JobClient is given a different Configuration (Resolved)
  - MAPREDUCE-6129 Job failed due to counter out of limited in MRAppMaster (Resolved)
- is related to
  - MAPREDUCE-5149 If job has more counters Job History server is not able to show them. (Open)
  - MAPREDUCE-6286 A typo in HistoryViewer makes some code useless, which causes counter limits are not reset correctly. (Resolved)
  - MAPREDUCE-4443 MR AM and job history server should be resilient to jobs that exceed counter limits (Patch Available)
- relates to
  - MAPREDUCE-5856 Counter limits always use defaults even if JobClient is given a different Configuration (Resolved)
  - MAPREDUCE-6271 org.apache.hadoop.mapreduce.Cluster GetJob() display warn log (Patch Available)