Hadoop Map/Reduce
MAPREDUCE-1361

In a pool with minimum slots, a new job will still receive guaranteed slots even after the pool's minimum-slot limit has been fulfilled

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 0.20.2
    • Fix Version/s: 0.20.3
    • Component/s: contrib/fair-share
    • Labels: None

      Description

      In 0.20, the fair scheduler compares all jobs by their running tasks, minimum slots, and deficit. A job whose number of running tasks is below its minimum-slot share is scheduled first.

      Consider a pool with a minimum of 1000 slots that already has 5000 running tasks.
      If we launch another job in this pool, the new job will receive minimum slots based on its weight, and that weight may be inflated if NewJobWeightBooster is used.
      So the new job still gets extra slots even though the pool's running tasks already far exceed its minimum.

      The latest version does not have this problem because it compares pools first and only then compares jobs within a pool.
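      The difference between the two checks can be illustrated with a minimal sketch. The class and field names below are hypothetical, not the actual FairScheduler code; it only models the scenario above, assuming the boosted weight entitles the new job to an 800-slot share of the pool's 1000-slot minimum.

      ```java
      public class MinSlotSketch {
          static class Pool { int minSlots; int runningTasks; }
          static class Job  { Pool pool; int runningTasks; int jobMinSlots; }

          // 0.20-style check: each job is compared against its own
          // weight-derived minimum-slot share, ignoring the pool total.
          static boolean jobLevelGrantsSlots(Job j) {
              return j.runningTasks < j.jobMinSlots;
          }

          // Later-style check: the pool is examined first; a pool already
          // above its minimum gets no guaranteed slots for any of its jobs.
          static boolean poolFirstGrantsSlots(Job j) {
              return j.pool.runningTasks < j.pool.minSlots;
          }

          public static void main(String[] args) {
              Pool pool = new Pool();
              pool.minSlots = 1000;
              pool.runningTasks = 5000;        // pool is far past its minimum

              Job fresh = new Job();
              fresh.pool = pool;
              fresh.runningTasks = 0;
              fresh.jobMinSlots = 800;         // inflated share from a booster-like weight

              System.out.println("0.20 job-level check grants slots: " + jobLevelGrantsSlots(fresh));
              System.out.println("pool-first check grants slots: " + poolFirstGrantsSlots(fresh));
          }
      }
      ```

      The job-level check reports that the fresh job deserves slots (0 running < 800 share), while the pool-first check correctly denies them (5000 running > 1000 minimum), which is the behavior difference this issue describes.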

        Activity

        No work has yet been logged on this issue.

          People

          • Assignee: Scott Chen
          • Reporter: Scott Chen
          • Votes: 0
          • Watchers: 5
