Hadoop Map/Reduce: MAPREDUCE-1105

CapacityScheduler: It should be possible to set a queue hard-limit beyond its actual capacity


Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.21.0
    • Fix Version/s: 0.21.0
    • Component/s: capacity-sched
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note:
      Replaced the existing max task limit variables "mapred.capacity-scheduler.queue.<queue-name>.max.map.slots" and "mapred.capacity-scheduler.queue.<queue-name>.max.reduce.slots" with "mapred.capacity-scheduler.queue.<queue-name>.maximum-capacity".

      The max task limit variables were hard limits used to throttle a queue; once reached, they did not allow the queue to grow further. The maximum-capacity variable instead defines a limit beyond which a queue cannot use the capacity of the cluster, providing a way to bound how much excess capacity a queue can use.

      maximum-capacity also behaves differently from the max task limit variables: it is a percentage, so the absolute limit grows and shrinks with the total cluster capacity, and the same percentage is applied to both map and reduce slots. (See the configuration sketch below.)
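      A minimal configuration sketch for the renamed property, assuming it is set in the capacity scheduler's configuration file (typically conf/capacity-scheduler.xml). The queue name "research" and the value 40 are hypothetical; only the property names come from this issue:

          <!-- Hypothetical queue "research"; the value is illustrative only. -->
          <property>
            <name>mapred.capacity-scheduler.queue.research.maximum-capacity</name>
            <!-- Percentage of total cluster capacity this queue may grow to;
                 the same percentage applies to both map and reduce slots. -->
            <value>40</value>
          </property>

          <!-- Pre-MAPREDUCE-1105 equivalents, now replaced by maximum-capacity:
               mapred.capacity-scheduler.queue.research.max.map.slots
               mapred.capacity-scheduler.queue.research.max.reduce.slots -->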

    Description

      Currently the CapacityScheduler caps a queue's capacity to its actual capacity if a hard-limit is specified to be greater than its actual capacity. We should allow the queue to grow up to the hard-limit if one is specified.

      Also, I propose we change the hard-limit unit to be a percentage rather than a number of slots.
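      For illustration (hypothetical numbers, not taken from this issue): with maximum-capacity set to 40% on a cluster offering 100 map and 100 reduce slots, the queue would be capped at 40 slots of each type; if the cluster grows to 200 slots of each type, the cap grows to 80 (0.4 * 200), whereas a hard-limit expressed in slots would stay fixed at its configured value.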

      Attachments

        1. MAPREDUCE-1105_apache0202.txt
          36 kB
          Allen Wittenauer
        2. MAPREDUCE-1105-yahoo-version20-5.patch
          46 kB
          Hemanth Yamijala
        3. MAPRED-1105-21-3.patch
          47 kB
          rahul k singh
        4. MAPREDUCE-1105-yahoo-version20-4.patch
          46 kB
          rahul k singh
        5. MAPRED-1105-21-3.patch
          47 kB
          rahul k singh
        6. MAPREDUCE-1105-yahoo-version20-3.patch
          46 kB
          rahul k singh
        7. MAPRED-1105-21-2.patch
          45 kB
          rahul k singh
        8. MAPREDUCE-1105-version20-2.patch
          44 kB
          rahul k singh
        9. MAPRED-1105-21-1.patch
          33 kB
          rahul k singh
        10. MAPREDUCE-1105-version20.patch.txt
          40 kB
          rahul k singh

        Activity


          People

            Assignee: rahul k singh (rksingh)
            Reporter: Arun Murthy (acmurthy)
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved:
