Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.7.1
- Component/s: None
- Labels: None
- Hadoop Flags: Reviewed
Description
We saw an MR deadlock recently:
- When NMs are restarted by the framework without recovery enabled, containers running on those nodes are identified as "ABORTED", and the MR AM tries to reschedule the "ABORTED" mapper containers.
- Since such lost mappers are "ABORTED" containers, the MR AM gives these mapper requests the normal mapper priority (priority=20). If any reducer requests (priority=10) are pending at the same time, the mapper requests must wait until the reducer requests are satisfied.
- In our test, each mapper needed 700+ MB, each reducer needed 1000+ MB, and the RM's available resource equaled one mapper request (700+ MB). Only one job was running in the system, so the scheduler could not allocate another reducer container, AND the MR AM thought there was enough headroom for the mapper, so reducer containers were never preempted (see the sketch after this list).
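For illustration, here is a minimal, self-contained sketch of the stalemate with the numbers above. The class, fields, and constants are hypothetical stand-ins for explanation only, not the actual RMContainerAllocator logic:
{code:java}
// Hypothetical sketch of the deadlock state described above; all names
// and numbers are illustrative, not actual Hadoop code.
public class DeadlockSketch {
  // Priorities from the description: reducer requests are served first.
  static final int PRIORITY_REDUCE = 10;
  static final int PRIORITY_MAP = 20; // rescheduled ABORTED mappers land here

  public static void main(String[] args) {
    int mapRequestMb = 768;     // one mapper needs 700+ MB
    int reduceRequestMb = 1024; // one reducer needs 1000+ MB
    int headroomMb = 768;       // RM available resource == one mapper request

    // Scheduler side: the reducer request (priority 10) sits ahead of the
    // mapper request (priority 20) but does not fit, so nothing is allocated.
    boolean reducerFits = headroomMb >= reduceRequestMb; // false

    // AM side: the headroom looks big enough for the pending mapper, so the
    // AM sees no reason to preempt a running reducer.
    boolean needPreemption = headroomMb < mapRequestMb; // false

    if (!reducerFits && !needPreemption) {
      System.out.println("Deadlock: the mapper waits behind the reducer"
          + " request, the reducer cannot be allocated, and no preemption"
          + " fires.");
    }
  }
}
{code}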
MAPREDUCE-6302 can solve most of the problem, but on the other hand, I think we also need to exclude scheduled reducers' resources when calculating #available-mapper-slots from the headroom, so that we can avoid excessive reducer preemption.
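A rough sketch of the proposed calculation, assuming a hypothetical helper (the method and parameter names below are made up for illustration, not actual Hadoop code): subtract the resources already promised to scheduled reducer requests from the headroom before counting mapper slots.
{code:java}
// Hypothetical sketch of the proposal: exclude scheduled reducers' resources
// from the headroom when computing #available-mapper-slots. Names and logic
// are illustrative only.
public class HeadroomSketch {

  static int availableMapperSlots(int headroomMb,
                                  int scheduledReducerMb,
                                  int mapRequestMb) {
    // Resources the scheduler will hand to already-scheduled reducer
    // requests are not really available to mappers, so exclude them first.
    int usableMb = Math.max(0, headroomMb - scheduledReducerMb);
    return usableMb / mapRequestMb;
  }

  public static void main(String[] args) {
    // Scenario from the description: headroom equals one mapper request,
    // but a 1000+ MB reducer request is already scheduled against it.
    int slots = availableMapperSlots(768, 1024, 768);
    // slots == 0, so the AM knows the mapper cannot actually be placed and
    // should ramp down / preempt a reducer instead of waiting indefinitely.
    System.out.println("available mapper slots = " + slots);
  }
}
{code}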
Attachments
Issue Links
- relates to: MAPREDUCE-6513 MR job got hanged forever when one NM unstable for some time (Closed)