Hadoop Map/Reduce / MAPREDUCE-4191

capacity scheduler: job unexpectedly exceeds queue capacity limit by one task


    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.23.3
    • Fix Version/s: None
    • Component/s: mrv2, scheduler
    • Labels:
      None

      Description

      While testing the queue capacity limits, it appears that a job can exceed the
      queue capacity limit by one task even when the user limit factor is 1. It's not
      clear to me why this happens.

      Here are the steps to reproduce:

      1) set yarn.app.mapreduce.am.resource.mb to 2048 (default value)
      2) set yarn.scheduler.capacity.root.default.user-limit-factor to 1.0 (default)
      3) set yarn.scheduler.capacity.root.default.capacity to 90 (%)
      4) For a cluster with a capacity of 56 GB, 90% is 50.4 GB, which rounds up to 51 GB.
      5) submit a job with large number of tasks, each task using 1G memory.
      6) The web UI shows the used resource as 52 GB, which is 92.9% of the cluster
      capacity (instead of the expected 90%) and 103.2% of the queue capacity
      (instead of the expected 100%).
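      Steps 1 through 3 correspond to configuration entries; a sketch of the relevant
      settings, using the property names and values from the steps above (the file
      placement, mapred-site.xml versus capacity-scheduler.xml, follows the usual
      Hadoop convention and is an assumption here):

      ```xml
      <!-- mapred-site.xml: MapReduce AM container size (step 1) -->
      <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>2048</value>
      </property>

      <!-- capacity-scheduler.xml: user limit factor and queue capacity (steps 2 and 3) -->
      <property>
        <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
        <value>1.0</value>
      </property>
      <property>
        <name>yarn.scheduler.capacity.root.default.capacity</name>
        <value>90</value>
      </property>
      ```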
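      The percentages in step 6 can be verified with a short arithmetic sketch (the
      memory figures are those reported above; nothing here models the scheduler
      itself, only the division that produces 92.9% and 103.2%):

      ```python
      # Check the capacity arithmetic from the reproduction steps.
      cluster_mb = 56 * 1024                 # 56 GB cluster capacity, in MB
      queue_capacity_mb = 0.90 * cluster_mb  # 90% queue capacity = 50.4 GB

      used_mb = 52 * 1024                    # usage observed in the web UI: 52 GB

      pct_of_cluster = 100 * used_mb / cluster_mb
      pct_of_queue = 100 * used_mb / queue_capacity_mb

      print(f"{pct_of_cluster:.1f}% of cluster capacity")  # ~92.9%
      print(f"{pct_of_queue:.1f}% of queue capacity")      # ~103.2%
      ```

      The one-task overshoot (52 GB used versus a 51 GB rounded limit) is exactly
      the single extra container the report describes.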

      Attachments

      Activity

      People

      • Assignee:
        tgraves Thomas Graves
      • Reporter:
        tgraves Thomas Graves
      • Votes:
        0
      • Watchers:
        6
