Hadoop Map/Reduce / MAPREDUCE-554

Improve limit handling in fairshare scheduler

    Details

    • Type: Improvement
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

      Description

      The fairshare scheduler can limit the number of concurrently running jobs in a pool via the maxRunningJobs parameter in its allocations definition. This limit is treated as a hard limit and takes effect even when the cluster has free capacity to run more jobs, resulting in underutilization. The same likely applies to the per-user maxRunningJobs parameter and to userMaxJobsDefault. It may help to treat these as soft limits and run additional jobs to keep the cluster fully utilized.
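      For reference, the limits under discussion are set in the scheduler's allocations file. A minimal sketch of how they are configured today (pool and user names here are hypothetical, not from this issue):

      ```xml
      <?xml version="1.0"?>
      <allocations>
        <!-- Hard cap today: at most 5 jobs from this pool run concurrently,
             even if the cluster has idle slots. -->
        <pool name="research">
          <maxRunningJobs>5</maxRunningJobs>
        </pool>
        <!-- Per-user cap, subject to the same hard-limit behavior. -->
        <user name="alice">
          <maxRunningJobs>3</maxRunningJobs>
        </user>
        <!-- Default cap for users without an explicit entry. -->
        <userMaxJobsDefault>2</userMaxJobsDefault>
      </allocations>
      ```

      Under the proposal, these values would act as targets rather than ceilings: when the cluster has unused capacity, the scheduler could admit jobs beyond them.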


          People

          • Assignee:
            Unassigned
          • Reporter:
            Hemanth Yamijala
          • Votes:
            0
          • Watchers:
            4

            Dates

            • Created:
              Updated:
