Type: New Feature
Affects Version/s: None
Fix Version/s: 0.21.0
The current schedulers in Hadoop all examine a single job on every heartbeat when choosing which tasks to assign, picking the job by FIFO order or fair sharing. This approach has inherent limitations. For example, if the job at the head of the queue is small (e.g. 10 maps in a cluster of 100 nodes), then only about 10% of heartbeats will come from a node holding one of its input blocks, so on average it will launch only one local map during its first 10 heartbeats at the head of the queue. This leads to very poor locality for small jobs. Instead, we need a more "global" view of scheduling that can look at multiple jobs. To resolve the locality problem, we will use the following algorithm:
- If the job at the head of the queue has no node-local task to launch, skip it and look through other jobs.
- If a job has waited at least T1 seconds while being skipped, also allow it to launch rack-local tasks.
- If a job has waited at least T2 > T1 seconds, also allow it to launch off-rack tasks.
This algorithm improves locality while bounding the delay that any job experiences in launching a task.
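The skip/wait rule above can be sketched as follows. This is an illustrative sketch only, not the actual Hadoop scheduler code: the class names, the `secondsSkipped` bookkeeping, and the concrete T1/T2 values are all assumptions made for the example.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the delay-scheduling wait rule; names and
// thresholds are illustrative, not real Hadoop scheduler classes.
public class DelaySchedulingSketch {
    // Ordered from most to least local, so ordinal() reflects "distance".
    enum Locality { NODE_LOCAL, RACK_LOCAL, OFF_RACK }

    static final long T1 = 5;   // assumed threshold (seconds) before rack-local is allowed
    static final long T2 = 10;  // assumed threshold (seconds) before off-rack is allowed

    static class Job {
        long secondsSkipped;    // how long this job has been skipped over
        Locality bestTask;      // best locality of any pending task, relative
                                // to the node that sent the current heartbeat
        Job(long skipped, Locality best) {
            this.secondsSkipped = skipped;
            this.bestTask = best;
        }
    }

    /** Widest locality level a skipped job may use, per the T1/T2 rule. */
    static Locality allowedLocality(Job job) {
        if (job.secondsSkipped >= T2) return Locality.OFF_RACK;
        if (job.secondsSkipped >= T1) return Locality.RACK_LOCAL;
        return Locality.NODE_LOCAL;
    }

    /** On a heartbeat, scan jobs in queue order and pick the first whose
     *  best available task falls within its allowed locality level. */
    static Job assignTask(List<Job> queue) {
        for (Job job : queue) {
            if (job.bestTask.ordinal() <= allowedLocality(job).ordinal()) {
                return job;     // launch a task from this job
            }
            // Otherwise skip the job; its wait clock keeps running.
        }
        return null;            // no job may launch a task on this node
    }

    public static void main(String[] args) {
        // The head job has only a rack-local task and has not waited past T1,
        // so it is skipped and the second job (node-local task) is chosen.
        Job head = new Job(2, Locality.RACK_LOCAL);
        Job next = new Job(0, Locality.NODE_LOCAL);
        if (assignTask(Arrays.asList(head, next)) != next)
            throw new AssertionError("expected second job");

        // Once the head job has waited past T1, it may go rack-local.
        head.secondsSkipped = 6;
        if (assignTask(Arrays.asList(head, next)) != head)
            throw new AssertionError("expected head job");
        System.out.println("ok");
    }
}
```

Because a skipped job's wait clock keeps running, T2 bounds the total delay any job experiences before it is allowed to launch a task anywhere in the cluster.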
It turns out that whether waiting is useful depends on how many tasks are left in the job (which determines the probability that a given heartbeat comes from a node with a local task) and on whether the job is CPU-bound or I/O-bound. Thus there may be logic for removing the wait on the last few tasks in the job.
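To make that intuition concrete: if a job has k pending tasks whose input blocks sit on distinct nodes in an N-node cluster, and heartbeats arrive roughly uniformly across nodes, then each heartbeat is node-local for the job with probability about k/N, so the expected wait for a local slot is about N/k heartbeats. This back-of-the-envelope model (uniform heartbeats, one block per node) is an assumption for illustration, not from the issue itself:

```java
// Rough model of the expected wait for a node-local heartbeat.
// Assumes heartbeats are uniform across nodes and the job's remaining
// tasks have input on distinct nodes (geometric distribution, p = k/N).
public class LocalWaitEstimate {
    static double expectedHeartbeats(int remainingTasks, int clusterNodes) {
        return (double) clusterNodes / remainingTasks;
    }

    public static void main(String[] args) {
        // 10 remaining tasks in a 100-node cluster: ~10 heartbeats on average.
        System.out.println(expectedHeartbeats(10, 100));  // 10.0
        // Last task: ~100 heartbeats, so continuing to wait is much less
        // attractive, which motivates dropping the wait near the end of a job.
        System.out.println(expectedHeartbeats(1, 100));   // 100.0
    }
}
```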
As a related issue, once we allow global scheduling, we can launch multiple tasks per heartbeat, as in HADOOP-3136. The initial implementation of HADOOP-3136 hurt performance because it launched multiple tasks only from the same job; with the wait rule above, we will do this only for jobs that are allowed to launch non-local tasks.