Hadoop Map/Reduce
MAPREDUCE-1105

CapacityScheduler: It should be possible to set a queue hard-limit beyond its actual capacity

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.21.0
    • Fix Version/s: 0.21.0
    • Component/s: capacity-sched
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
Replaced the existing max task limit variables "mapred.capacity-scheduler.queue.<queue-name>.max.map.slots" and "mapred.capacity-scheduler.queue.<queue-name>.max.reduce.slots" with "mapred.capacity-scheduler.queue.<queue-name>.maximum-capacity".

The max task limit variables were used to throttle the queue, i.e., they were hard limits that did not allow the queue to grow further.
The maximum-capacity variable defines a limit beyond which a queue cannot use the capacity of the cluster. This provides a means to limit how much excess capacity a queue can use.

The maximum-capacity variable behaves differently from the max task limit variables: maximum-capacity is a percentage, so it grows and shrinks in absolute terms with total cluster capacity. Also, the same maximum-capacity percentage is applied to both maps and reduces.
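For illustration, a sketch of how the new property might be set in capacity-scheduler.xml; the queue name and value here are hypothetical, not from this issue:

```
<!-- Hypothetical example: cap the "research" queue at 40% of total cluster
     capacity. The same percentage applies to both map and reduce slots. -->
<property>
  <name>mapred.capacity-scheduler.queue.research.maximum-capacity</name>
  <value>40</value>
</property>
```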

      Description

Currently the CS caps a queue's capacity to its actual capacity if a hard-limit is specified to be greater than its actual capacity. We should allow the queue to go up to the hard-limit if specified.

Also, I propose we change the hard-limit unit to be a percentage rather than #slots.
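To illustrate why a percentage-based hard-limit tracks cluster growth while a fixed slot count does not, a minimal sketch; the function name and the numbers are hypothetical:

```python
def max_slots(maximum_capacity_percent, total_cluster_slots):
    """Absolute slot cap implied by a percentage-based maximum-capacity.

    Unlike a fixed #slots limit, this cap grows and shrinks as the
    cluster's total slot count changes.
    """
    return int(total_cluster_slots * maximum_capacity_percent / 100)

# Hypothetical queue capped at 40% of the cluster:
print(max_slots(40, 100))  # 40 slots on a 100-slot cluster
print(max_slots(40, 200))  # 80 slots once the cluster doubles
```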

        Attachments

        1. MAPRED-1105-21-1.patch
          33 kB
          rahul k singh
        2. MAPRED-1105-21-2.patch
          45 kB
          rahul k singh
        3. MAPRED-1105-21-3.patch
          47 kB
          rahul k singh
        4. MAPRED-1105-21-3.patch
          47 kB
          rahul k singh
        5. MAPREDUCE-1105_apache0202.txt
          36 kB
          Allen Wittenauer
        6. MAPREDUCE-1105-version20.patch.txt
          40 kB
          rahul k singh
        7. MAPREDUCE-1105-version20-2.patch
          44 kB
          rahul k singh
        8. MAPREDUCE-1105-yahoo-version20-3.patch
          46 kB
          rahul k singh
        9. MAPREDUCE-1105-yahoo-version20-4.patch
          46 kB
          rahul k singh
        10. MAPREDUCE-1105-yahoo-version20-5.patch
          46 kB
          Hemanth Yamijala

          Activity

            People

• Assignee:
  rksingh rahul k singh
• Reporter:
  acmurthy Arun C Murthy
• Votes:
  0
• Watchers:
  3

              Dates

              • Created:
                Updated:
                Resolved: