Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Hadoop Flags: Reviewed
Description
Currently, when the capacity scheduler schedules a high memory job, each scheduled task counts as occupying only a single slot of the queue's capacity, even though its larger memory requirement may prevent other jobs from using the remaining slots on that node. To be fair, the capacity scheduler should account for high memory jobs proportionally, charging each task a share of queue capacity that reflects its memory requirement relative to the default per-slot memory.
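For illustration, here is a minimal Java sketch of the proportional accounting proposed above: a task is charged ceil(taskMemory / slotMemory) slots against the queue instead of a flat one slot. The class and method names (HighRamAccounting, slotsPerTask, slotMemoryMB) are hypothetical and do not reflect the actual CapacityScheduler code.

{code:java}
/**
 * Sketch (not the actual CapacityScheduler implementation) of the
 * proportional accounting proposed in the description: a task is
 * charged ceil(taskMemory / slotMemory) slots rather than a flat 1.
 * All names here are hypothetical.
 */
public class HighRamAccounting {

  /** Memory that one default slot represents, in MB. */
  private final long slotMemoryMB;

  public HighRamAccounting(long slotMemoryMB) {
    this.slotMemoryMB = slotMemoryMB;
  }

  /**
   * Number of queue-capacity slots a single task should occupy.
   * A task needing no more than the default memory costs 1 slot;
   * a high memory task costs proportionally more, rounded up.
   */
  public int slotsPerTask(long taskMemoryMB) {
    if (taskMemoryMB <= slotMemoryMB) {
      return 1;
    }
    // Round up so a task needing 2.5x the default memory blocks
    // 3 slots, not 2 -- otherwise the queue is undercharged.
    return (int) ((taskMemoryMB + slotMemoryMB - 1) / slotMemoryMB);
  }

  public static void main(String[] args) {
    HighRamAccounting acct = new HighRamAccounting(1024); // 1 GB default slot
    System.out.println(acct.slotsPerTask(1024)); // 1 (normal task)
    System.out.println(acct.slotsPerTask(3072)); // 3 (3 GB high-RAM task)
  }
}
{code}

Under this scheme, a queue with a capacity of 10 slots that runs three such 3 GB tasks would be considered 90% used, rather than 30% as under the current one-task-one-slot accounting.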
Attachments
Issue Links
- blocks
  - MAPREDUCE-517 The capacity-scheduler should assign multiple tasks per heartbeat (Closed)
  - MAPREDUCE-516 Fix the 'cluster drain' problem in the Capacity Scheduler wrt High RAM Jobs (Closed)
- incorporates
  - HADOOP-5934 testHighRamJobWithSpeculativeExecution needs some changes (Resolved)
- is blocked by
  - HADOOP-5932 MemoryMatcher logs 0 as freeMemOnTT even though there are free slots available on TaskTracker (Resolved)