Hadoop Common / HADOOP-3136

Assign multiple tasks per TaskTracker heartbeat

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.20.0
    • Component/s: None
    • Labels: None

      Description

      In today's logic for finding a new task, we assign only one task per TaskTracker heartbeat.

      We could instead give the TaskTracker multiple tasks, subject to the number of free slots it has. For maps, we could assign data-local tasks first, and run some logic to decide what to give it once we run out of data-local tasks (e.g., tasks from overloaded racks, or tasks with the least locality). In addition to maps, if the tracker has free reduce slots, we could give it reduce task(s) as well. For reduces, we could favor nodes closer to the nodes running the most maps, on the assumption that the data generated is proportional to the number of maps. For example, if rack1 holds 70% of the input splits, and we know that most maps are data/rack local, we would try to schedule ~70% of the reducers there.

      Thoughts?
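      The assignment policy proposed above can be sketched roughly as follows. This is a minimal illustrative sketch, not Hadoop's actual JobTracker/TaskScheduler code: the class and field names (`HeartbeatSketch`, `TaskStub`, `dataLocal`, etc.) are hypothetical, and the fallback for non-local maps is simplified to a single second pass.

      ```java
      import java.util.ArrayList;
      import java.util.List;

      // Hypothetical sketch: per-heartbeat assignment of multiple tasks,
      // bounded by the tracker's free map and reduce slots.
      public class HeartbeatSketch {

          static class TaskStub {
              final String id;
              final boolean isMap;
              final boolean dataLocal; // true if an input split is local to this tracker
              TaskStub(String id, boolean isMap, boolean dataLocal) {
                  this.id = id;
                  this.isMap = isMap;
                  this.dataLocal = dataLocal;
              }
          }

          // Assign up to freeMapSlots map tasks (data-local first) and up to
          // freeReduceSlots reduce tasks from the pending list, in one heartbeat.
          static List<TaskStub> assign(List<TaskStub> pending,
                                       int freeMapSlots, int freeReduceSlots) {
              List<TaskStub> assigned = new ArrayList<>();
              // First pass: data-local maps.
              for (TaskStub t : pending) {
                  if (freeMapSlots > 0 && t.isMap && t.dataLocal) {
                      assigned.add(t);
                      freeMapSlots--;
                  }
              }
              // Second pass: remaining maps if slots are still free
              // (a real scheduler would apply a locality heuristic here).
              for (TaskStub t : pending) {
                  if (freeMapSlots > 0 && t.isMap && !assigned.contains(t)) {
                      assigned.add(t);
                      freeMapSlots--;
                  }
              }
              // Reduces fill their own slots independently of the map slots.
              for (TaskStub t : pending) {
                  if (freeReduceSlots > 0 && !t.isMap) {
                      assigned.add(t);
                      freeReduceSlots--;
                  }
              }
              return assigned;
          }

          public static void main(String[] args) {
              List<TaskStub> pending = new ArrayList<>();
              pending.add(new TaskStub("m1", true, false));
              pending.add(new TaskStub("m2", true, true));
              pending.add(new TaskStub("r1", false, false));
              // With 1 free map slot and 1 free reduce slot, the data-local
              // map m2 and the reduce r1 are handed out in a single heartbeat.
              for (TaskStub t : assign(pending, 1, 1)) {
                  System.out.println(t.id);
              }
          }
      }
      ```

      The key contrast with the existing behavior is that one heartbeat can return several tasks at once, so a tracker with many free slots fills up in one round trip instead of one task per heartbeat interval.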

        Attachments

        1. HADOOP-3136_0_20080805.patch
          8 kB
          Arun C Murthy
        2. HADOOP-3136_1_20080809.patch
          7 kB
          Arun C Murthy
        3. HADOOP-3136_2_20080911.patch
          11 kB
          Arun C Murthy
        4. HADOOP-3136_3_20081211.patch
          24 kB
          Arun C Murthy
        5. HADOOP-3136_4_20081212.patch
          41 kB
          Arun C Murthy
        6. HADOOP-3136_5_20081215.patch
          45 kB
          Arun C Murthy

          Activity

            People

            • Assignee:
              acmurthy Arun C Murthy
              Reporter:
              devaraj Devaraj Das
            • Votes: 0
              Watchers: 19

              Dates

              • Created:
                Updated:
                Resolved: