Details
Description
I submit a big job with 500 maps and 350 reduces to a queue (FairScheduler) whose maximum is 300 cores. Once the job has finished 100% of its maps, 300 of the reduces are running and occupy all 300 cores of the queue. Then a map fails and is retried: the map attempt waits for a core, while the 300 running reduces wait for the failed map's output. This is a deadlock. As a result, the job is blocked, and later jobs in the queue cannot run because no cores are available in the queue.
I think a similar issue exists for the memory limit of a queue.
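To make the deadlock concrete, here is a minimal, illustrative Java sketch. It is not the actual Hadoop ApplicationMaster code; the class, method, and parameter names are invented for this example. It only models the condition described above: pending map attempts exist, but the queue headroom is smaller than one map container while reducers still hold all the cores, so the only way forward is to preempt (ramp down) reducers.

```java
// Illustrative sketch only, not Hadoop source code. It captures the
// deadlock condition from the report: all queue cores are held by
// reducers while a re-run map attempt waits for a container.
public final class ReducerStarvationCheck {

    /**
     * Returns true when reducers should be preempted: there are pending
     * map attempts but the queue headroom cannot fit even one of them,
     * so waiting on running reducers can only deadlock.
     */
    static boolean shouldPreemptReducers(int pendingMaps,
                                         int runningReducers,
                                         int queueMaxCores,
                                         int coresInUse,
                                         int coresPerMap) {
        int headroom = queueMaxCores - coresInUse;
        return pendingMaps > 0
                && headroom < coresPerMap   // no room for the retried map
                && runningReducers > 0;     // but reducers could be released
    }

    public static void main(String[] args) {
        // Numbers from the report: a 300-core queue fully occupied by 300
        // reducers (1 core each), one failed map waiting to be retried.
        boolean preempt = shouldPreemptReducers(1, 300, 300, 300, 1);
        System.out.println("Preempt reducers to break the deadlock? " + preempt);
    }
}
```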
Attachments
Issue Links
- breaks
  - MAPREDUCE-6514 Job hangs as ask is not updated after ramping down of all reducers (Closed)
  - MAPREDUCE-6689 MapReduce job can infinitely increase number of reducer resource requests (Closed)
- is related to
  - MAPREDUCE-6513 MR job got hanged forever when one NM unstable for some time (Closed)
  - YARN-3485 FairScheduler headroom calculation doesn't consider maxResources for Fifo and FairShare policies (Closed)
  - MAPREDUCE-5844 Add a configurable delay to reducer-preemption (Closed)
- relates to
  - YARN-1680 availableResources sent to applicationMaster in heartbeat should exclude blacklistedNodes free memory. (Open)
  - YARN-3446 FairScheduler headroom calculation should exclude nodes in the blacklist (Resolved)
  - MAPREDUCE-6501 Improve reducer preemption (Open)