Hadoop YARN / YARN-4287

Capacity Scheduler: Rack Locality improvement

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.7.1
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: capacityscheduler
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      YARN-4189 does an excellent job describing the issues with the current delay scheduling algorithms within the capacity scheduler. The design proposal also seems like a good direction.

      This jira proposes a simple interim solution to the key issue we've been experiencing on a regular basis:

      • rackLocal assignments trickle out due to nodeLocalityDelay. This can have a significant impact on things like CombineFileInputFormat, which targets very specific nodes in its split calculations.

      I'm not sure when YARN-4189 will become reality so I thought a simple interim patch might make sense. The basic idea is simple:
      1) Separate delays for rackLocal and OffSwitch (today there is only one)
      2) When we're getting rackLocal assignments, subsequent rackLocal assignments should not be delayed

      Patch will be uploaded shortly. No big deal if the consensus is to go straight to YARN-4189.
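
      To make the intent concrete, here is a minimal, illustrative Java sketch of idea (2). It is not the actual patch; names such as missedOpportunities, nodeLocalityDelay and rackLocalityDelay are placeholders chosen for readability:

          // Illustrative sketch only (not the patch). A rack-local assignment no
          // longer resets the missed-opportunity counter all the way to zero, so
          // subsequent rack-local assignments are not delayed again.
          enum Locality { NODE_LOCAL, RACK_LOCAL, OFF_SWITCH }

          class DelaySketch {
            long missedOpportunities;   // scheduling opportunities skipped so far
            long nodeLocalityDelay;     // wait this long before accepting rack-local
            long rackLocalityDelay;     // separate (longer) wait before accepting off-switch

            boolean canAssign(Locality type) {
              switch (type) {
                case NODE_LOCAL: return true;
                case RACK_LOCAL: return missedOpportunities >= nodeLocalityDelay;
                default:         return missedOpportunities >= rackLocalityDelay; // OFF_SWITCH
              }
            }

            void containerAssigned(Locality type) {
              if (type == Locality.NODE_LOCAL) {
                missedOpportunities = 0;                // start waiting for node-locality again
              } else if (type == Locality.RACK_LOCAL) {
                // Idea (2): do NOT reset to zero, so the next rack-local assignment
                // is accepted immediately instead of waiting nodeLocalityDelay again.
                missedOpportunities = nodeLocalityDelay;
              }
            }
          }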

      Attachments

      1. YARN-4287.patch
        30 kB
        Nathan Roberts
      2. YARN-4287-minimal.patch
        15 kB
        Nathan Roberts
      3. YARN-4287-minimal-v2.patch
        15 kB
        Nathan Roberts
      4. YARN-4287-minimal-v3.patch
        15 kB
        Nathan Roberts
      5. YARN-4287-minimal-v4.patch
        21 kB
        Nathan Roberts
      6. YARN-4287-minimal-v4-branch-2.7.patch
        20 kB
        Nathan Roberts
      7. YARN-4287-v2.patch
        37 kB
        Nathan Roberts
      8. YARN-4287-v3.patch
        37 kB
        Nathan Roberts
      9. YARN-4287-v4.patch
        46 kB
        Nathan Roberts

        Issue Links

          Activity

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 20m 24s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 2 new or modified test files.
          +1 javac 10m 42s There were no new javac warning messages.
          +1 javadoc 15m 3s There were no new javadoc warning messages.
          +1 release audit 0m 48s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 1m 16s The applied patch generated 13 new checkstyle issues (total was 257, now 268).
          -1 whitespace 0m 10s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 install 3m 13s mvn install still works.
          +1 eclipse:eclipse 0m 53s The patch built with eclipse:eclipse.
          +1 findbugs 2m 17s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          -1 yarn tests 60m 42s Tests failed in hadoop-yarn-server-resourcemanager.
              115m 33s  



          Reason Tests
          Failed unit tests hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimits
            hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt
          Timed out tests org.apache.hadoop.yarn.server.resourcemanager.TestRM



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12767900/YARN-4287.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / d1cdce7
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9516/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/9516/artifact/patchprocess/whitespace.txt
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/9516/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9516/testReport/
          Java 1.7.0_55
          uname Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9516/console

          This message was automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Thanks Nathan Roberts, +1 to having an interim solution. The proposal also looks good; I will review the patch soon.

          nroberts Nathan Roberts added a comment -

          Fixed unit test failures and addressed most checkstyle errors

          leftnoteasy Wangda Tan added a comment -

          Some suggestions:
          1) RACK_LOCALITY_EXTRA_DELAY -> RACK_LOCALITY_DELAY, same as the configuration property name (rack-locality-delay)

          2) Do you think it is a good idea to separate the old rack-locality-delay computation (using getLocalityWaitFactor) from the new rack-locality-delay config? Currently rack-locality-delay = min(old-computed-delay, new-specified-delay). Since getLocalityWaitFactor has some flaws, I think we can make this configurable so the user can choose between the specified and the computed value.

          Pseudo code may look like:

          if type is OFF_SWITCH:
            if rack-locality-delay specified:
              delay = rack-locality-delay
            else:
              delay = computed-locality-delay
          else if type is RACK_LOCAL:
              delay = min(node-locality-delay, computed-or-specified-rack-locality-delay)
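
          A rough Java rendering of the same selection logic (illustrative only; the parameter names are placeholders, not identifiers from the patch):

            // Sketch of the pseudo code above: which delay applies to a given request type.
            static long delayFor(String type, boolean rackDelaySpecified,
                long rackLocalityDelay, long computedLocalityDelay, long nodeLocalityDelay) {
              long rackOrComputed =
                  rackDelaySpecified ? rackLocalityDelay : computedLocalityDelay;
              if ("OFF_SWITCH".equals(type)) {
                return rackOrComputed;
              } else if ("RACK_LOCAL".equals(type)) {
                return Math.min(nodeLocalityDelay, rackOrComputed);
              }
              return 0; // NODE_LOCAL: no delay
            }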
          

          3)

          When we're getting rackLocal assignments, subsequent rackLocal assignments should not be delayed

          +1 to the fix. Since this is a behavior change, do you think we need to make it configurable? This change could lead to fewer node-local container allocations in some cases.

          Thanks,
          Wangda

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 17m 9s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 4 new or modified test files.
          +1 javac 7m 59s There were no new javac warning messages.
          +1 javadoc 10m 26s There were no new javadoc warning messages.
          +1 release audit 0m 24s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 0m 50s The applied patch generated 4 new checkstyle issues (total was 257, now 259).
          -1 whitespace 0m 12s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 install 1m 31s mvn install still works.
          +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
          +1 findbugs 1m 31s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          +1 yarn tests 58m 19s Tests passed in hadoop-yarn-server-resourcemanager.
              99m 3s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12768138/YARN-4287-v2.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 124a412
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9533/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/9533/artifact/patchprocess/whitespace.txt
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/9533/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9533/testReport/
          Java 1.7.0_55
          uname Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9533/console

          This message was automatically generated.

          nroberts Nathan Roberts added a comment -

          Thanks for the comments. You're right that the logic can be simplified in that area. Let me do that and post a followup patch.

          nroberts Nathan Roberts added a comment -

          V3 of patch. Thanks again for the comments.

          RACK_LOCALITY_EXTRA_DELAY -> RACK_LOCALITY_DELAY, same as configuration property name (rack-locality-delay)

          Done - changed to absolute instead of relative to nodeLocality

          Do you think if is it a good idea to separate old rack-locality-delay computation (using getLocalityWaitFactor) and new rack-locality-delay config? Now rack-locality-delay = min(old-computed-delay, new-specified-delay), since the getLocalityWaitFactor has some flaws, I think we can make this configurable so user can choose to use specified or computed.

          I simplified the code a little in this area to make it easier to see where the computed-locality-delay is used. I didn't separate them in this version of the patch because I still want to be able to specify rack-locality-delay BUT have the computed delay take effect when an application is not asking for locality OR is really small. This is a very important capability for at least our use cases.

          My opinion is that we shouldn't make it configurable to get the old behavior. I can be convinced otherwise, if that's what folks want. Here's my reasoning:

          • This is a behavior change, but I can't think of any good cases where someone would prefer the old behavior to the new. Let me know if you can think of some.
          • Node locality might go down a little bit but I think it's quite unlikely this will happen in practice. As soon as it sees a node-local assignment, it immediately goes back to waiting for node-locality - so it's quite hard to only get rack locality when there is node locality to be had.
          • Rack locality will go up because previously the computedDelay used for OFFSWITCH would actually kick in prior to a rack-local opportunity, which wasn't ideal. I would think this would offset any node locality we lost.
          leftnoteasy Wangda Tan added a comment -

          Nathan Roberts,
          Thanks for updating; some thoughts regarding your comments:

          This is a behavior change, but I can't think of any good cases where someone would prefer the old behavior to the new. Let me know if you can think of some.

          I agree with you, most of your changes are good, and I prefer to enable this for better performance. But I can still think of some edge cases, and I'd prefer to keep the old behavior to avoid surprises. Let me explain more:

          There are several behavior changes in your patch:
          1. rack-delay = min (computed-offswitch-delay, configured-rack-delay)
          When a large configured-rack-delay is specified, it uses the old behavior, so this is safe to me. And I think what you mentioned before:

          I didn't separate them in this version of the patch because I still want to be able to specify rack-locality-delay BUT have the computed delay take effect when an application is not asking for locality OR is really small.

          Makes sense to me; I just feel the current way to compute the offswitch delay needs to be improved. I will add an example below.

          2. node-delay = min(rack-delay, node-delay).
          If a cluster has 40 nodes and a user requests 3 containers on node1:

          Assume the configured-rack-delay=50, 
          rack-delay = min(3 (#requested-container) * 1 (#requested-resource-name)  / 40, 50) = 0.
          So:
          node-delay = min(rack-delay, 40) = 0
          

          In the above example, no matter how rack-delay is specified/computed, if we can keep the node-delay at 40, we have a better chance of getting node-local containers allocated.

          3. Don't restore missed-opportunity if rack-local container allocated.
          The benefit of this change is obvious - we can get faster rack-local container allocation. But I feel this can also affect node-local container allocation (if the application asks for only a small subset of nodes in a rack), which may lead to some performance regression for locality/IO-sensitive applications.

          nroberts Nathan Roberts added a comment -

          Thanks Wangda Tan for the comments.

          2. node-delay = min(rack-delay, node-delay).
          If a cluster has 40 nodes, user requests 3 containers on node1:

          Assume the configured-rack-delay=50,
          rack-delay = min(3 (#requested-container) * 1 (#requested-resource-name) / 40, 50) = 0.
          So:
          node-delay = min(rack-delay, 40) = 0

          In above example, no matter how rack-delay specified/computed, if we can keep the node-delay to 40, we have better chance to get node-local containers allocated.

          It is true that we won't get good locality in this example. iiuc, we didn't get good locality before the patch either. i.e. canAssign() would return true for NODE-LOCAL and OFF-SWITCH without delay. With the patch, canAssign() will return true for NODE-LOCAL, RACK-LOCAL, and OFF-SWITCH without delay. I believe the original intent of using localityWaitFactor was to avoid delaying small resource asks (could be a small job, or could be the tail of a large job). Unfortunately the algorithm still delayed RACK-LOCAL assignments. This made no sense to me - Accept OFF-SWITCH without delay, yet don't accept RACK-LOCAL?? I agree that we could change things here to get better locality for small requests, but to me this could have significant impact on small job latency so it would make me nervous to do so as part of this jira.

          3. Don't restore missed-opportunity if rack-local container allocated.
          The benefit of this change is obvious - we can get faster rack-local container allocation. But I feel this can also affect node-local container allocation (If the application asks only a small subset of nodes in a rack), may lead to some performance regression for locality I/O sensitive applications.

          You're correct that it can affect node-local container allocation. I will make this behavior configurable. The reason I didn't in the first place was that I felt the circumstances where we lose out are rare (not currently getting NODE-LOCAL assignments because otherwise missedOpportunities resets, AND not getting OFF-SWITCH assignments because missedOpportunities doesn't reset for OFF-SWITCH, so it will quickly allocate everything to OFF-SWITCH as soon as it hits that threshold). On the other hand, the effects of not doing it are dramatic. We have been having cases where 5% of NMs are down for maintenance and some jobs take about an order of magnitude longer to run than normal.

          So, here are the changes I propose:
          1) I need to change the way rackLocalityDelay is specified because it doesn't handle the case where the configuration value is larger than the cluster size. I was thinking of just scaling it. Let's say node-locality-delay=5000, rack-locality-delay=5100, cluster_size is 3000. In the existing code, node-locality-delay would automatically get lowered to 3000. Instead, it will lower rack-locality-delay to 3000, and node-locality-delay will be proportionally adjusted (5000 * 3000 / 5100) = 2941.
          2) Add a configurable boolean that controls whether a rack-local assignment resets missed_opportunities to 0 (old behavior), OR node-locality-delay (new behavior). Default of new behavior.

          Let me know what you think of that approach.
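
          A small sketch of the proportional adjustment in (1), with placeholder names (illustrative only; node=5000, rack=5100 on a 3000-node cluster yields rack=3000, node=2941):

            // Cap rack-locality-delay at the cluster size and scale
            // node-locality-delay down proportionally.
            static long[] scaleDelays(long nodeDelay, long rackDelay, long clusterSize) {
              if (rackDelay > clusterSize) {
                nodeDelay = nodeDelay * clusterSize / rackDelay;  // 5000 * 3000 / 5100 = 2941
                rackDelay = clusterSize;                          // 3000
              }
              return new long[] { nodeDelay, rackDelay };
            }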

          leftnoteasy Wangda Tan added a comment -

          Thanks for sharing your thoughts, Nathan Roberts!

          iiuc, we didn't get good locality before the patch either. i.e. canAssign() would return true for NODE-LOCAL and OFF-SWITCH without delay.

          Yes, you're correct, I think we can safely use min(computed-offswitch, configured-offswitch) as final offswitch/rack delay.

          1) I need to change the way rackLocalityDelay is specified because it doesn't handle the case where the configuration value is larger than the cluster size. I was thinking of just scaling it. Let's say node-locality-delay=5000, rack-locality-delay=5100, cluster_size is 3000. In the existing code, node-locality-delay would automatically get lowered to 3000. Instead, it will lower rack-locality-delay to 3000, and node-locality-delay will be proportionally adjusted (5000 * 3000 / 5100) = 2941.

          I think instead of scaling, I suggest simply capping the rack/offswitch delay at the cluster size, so:

          • rack-delay = min(offswitch, node-locality-delay, clusterSize)
          • offswitch-delay = min(offswitch, clusterSize)
            The scaling behavior could be hard to explain to end users.

          2) Add a configurable boolean that controls whether a rack-local assignment resets missed_opportunities to 0 (old behavior), OR node-locality-delay (new behavior). Default of new behavior.

          This is fine to me since this is a configurable item and you have done tests for this change already.

          nroberts Nathan Roberts added a comment -

          Thanks Wangda Tan for the quick responses.

          I think instead of scaling, I suggest to simply cap rack/offswitch delay by the cluster size, so:

          rack-delay = min(offswitch, node-locality-delay, cluserSize)
          offswitch-delay = min(offswitch, clusterSize)
          The scaling behavior could be hard to explain to end users.

          I agree that it's not as easy to describe. BUT, the problem I have is that I don't know how to deal with the common case of someone wanting node-locality-delay to be based on the size of the cluster. What we do is set node-locality-delay to something guaranteed to be larger than the cluster, knowing the scheduler will automatically lower it to the size of the cluster. This works great for a single delay on any size cluster. However, it's impossible to describe two different delays using this same approach. For example, I might always want node-locality-delay to be 10% less than rack-locality-delay. Maybe we should specify rack-locality-delay as a percentage above node-locality-delay (e.g. 10%)? Still a little hard to describe though.

          nroberts Nathan Roberts added a comment -

          V4 of patch.

          • I moved the calculation of locality delays out of canAssign() since this is a very hot path and the answer only changes when the size of the cluster changes. This caused a few unit tests to start failing because the number of nodes in the cluster was not always being mocked at the right time, causing the LocalityDelays to be 0, which confused some of the assumptions.
          • I left the scaling approach in, but am willing to move to a rack-locality-delay that is specified as a percent. I absolutely want a node-locality-delay of 5000 and a rack-locality-delay of 5100 to do something intelligent on a 3000-node cluster.
          • One argument for sticking with the scaling approach is the fact that we basically do it today in a simpler fashion. If you specify node-locality-delay of 5000 on a 3000 node cluster, it gets automatically scaled down to 3000 without informing the user. So I'd say scale it but don't try to explain it in user documentation.
          • Updated the documentation
          nroberts Nathan Roberts added a comment -

          Wangda Tan, Another very simple approach is to just not reset schedulingOpportunities when we allocate a RACK_LOCAL container. This isn't quite as flexible, but might be fine for almost all use cases (Even today, we degrade into OFF_SWITCH at the same threshold or earlier, and will continue to schedule OFF_SWITCH without delay).
          YARN-4287-minimal.patch does this.

          Let me know your thoughts.

          leftnoteasy Wangda Tan added a comment -

          Hi Nathan Roberts,

          One argument for sticking with the scaling approach is the fact that we basically do it today in a simpler fashion. If you specify node-locality-delay of 5000 on a 3000 node cluster, it gets automatically scaled down to 3000 without informing the user. So I'd say scale it but don't try to explain it in user documentation.

          I still think scaling down is not a straightforward way to handle the problem you mentioned (the user isn't sure of the size of the cluster). Instead, I think we can use a percentage. The user can say: I want the node locality delay to be 300, OR 10% of the cluster size. And the same for the rack locality delay. The scheduler will compute the actual delay at runtime. With this, I think we can safely cap the delay at the cluster size. Does this make sense to you?
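
          A hedged sketch of how such a value could be parsed (the "px or percent" idea); the helper and its behavior are assumptions, not anything in the patch:

            // Accept either an absolute opportunity count ("300") or a percentage
            // of the cluster size ("10%"), and cap the result at the cluster size.
            static int resolveDelay(String configured, int clusterSize) {
              String v = configured.trim();
              if (v.endsWith("%")) {
                double pct = Double.parseDouble(v.substring(0, v.length() - 1));
                return (int) Math.min(clusterSize, clusterSize * pct / 100.0);
              }
              return Math.min(clusterSize, Integer.parseInt(v));
            }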

          Thanks,

          nroberts Nathan Roberts added a comment -

          +1 on percentages. My only concern is that node-locality-delay is already in there and is not a percentage. I can deprecate the existing node-locality-delay and add the percentage-based configs.

          leftnoteasy Wangda Tan added a comment -

          I think maybe it's better not to deprecate the original option; we can support both in the same option, just like HTML element sizes, where you can set either px or a percentage of the parent's width/height.

          mding MENG DING added a comment -

          Looking at this issue, I have to admit that I had been frustrated with the existing getLocalityWaitFactor, and had the same question as Nathan Roberts:

          This made no sense to me - Accept OFF-SWITCH without delay, yet don't accept RACK-LOCAL??

          IMHO, although it makes sense to introduce a configurable rack-locality delay, it doesn't help when the cluster is really busy as described in YARN-4189 and YARN-3309. As an interim solution, I am in favor of the YARN-4287-minimal.patch, but I think the default configuration of DEFAULT_RACK_LOCALITY_FULL_RESET should be set to true to be backward compatible.
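
          For reference, a minimal sketch of what a backward-compatible default could look like in the scheduler configuration (the property key shown is an assumption; only the constant name comes from the discussion):

            // Sketch only: default to true so a rack-local assignment keeps fully
            // resetting the missed-opportunity count (i.e. today's behavior).
            class FullResetSketch {
              static final String RACK_LOCALITY_FULL_RESET =
                  "yarn.scheduler.capacity.rack-locality-full-reset";   // assumed key
              static final boolean DEFAULT_RACK_LOCALITY_FULL_RESET = true;

              boolean getRackLocalityFullReset(org.apache.hadoop.conf.Configuration conf) {
                return conf.getBoolean(RACK_LOCALITY_FULL_RESET,
                    DEFAULT_RACK_LOCALITY_FULL_RESET);
              }
            }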

          leftnoteasy Wangda Tan added a comment -

          I'm fine with either direction, but for the 4287-minimal.patch, I suggest capping the rack-local delay at the cluster size to avoid off-switch requests waiting too long when the request needs lots of containers.

          mding MENG DING added a comment -

          Agreed.

          nroberts Nathan Roberts added a comment -

          Wangda Tan, MENG DING - Thanks for the comments. I uploaded a version of the minimal patch that limits the off-switch delay to the cluster size and defaults full reset to true.

          nroberts Nathan Roberts added a comment -

          Noticed a simple spelling error.

          leftnoteasy Wangda Tan added a comment -

          Thanks for update, Nathan Roberts.

          The patch generally looks good; a few comments:

          • Could you add a comment at

                  return (Math.min(rmContext.getScheduler().getNumClusterNodes(),
                      (requiredContainers * localityWaitFactor)) < missedOpportunities);

            so that people reading the code get a better understanding of why missedOpportunity needs to be capped by numClusterNodes? (An annotated sketch follows this list.)

          • I would suggest adding tests for missedOpportunity capped by numClusterNodes and resetSchedulingOpportunity for rack requests.
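
          An annotated version of that snippet (the comment wording is a suggestion only, not text from the patch):

                  // Cap the wait threshold at the cluster size: a request for many
                  // containers would otherwise make requiredContainers * localityWaitFactor
                  // huge and delay OFF_SWITCH assignments far longer than intended.
                  return (Math.min(rmContext.getScheduler().getNumClusterNodes(),
                      (requiredContainers * localityWaitFactor)) < missedOpportunities);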
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 3m 48s trunk passed
          +1 compile 0m 31s trunk passed with JDK v1.8.0_66
          +1 compile 0m 28s trunk passed with JDK v1.7.0_79
          +1 checkstyle 0m 15s trunk passed
          +1 mvneclipse 0m 17s trunk passed
          +1 findbugs 1m 25s trunk passed
          +1 javadoc 0m 31s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 34s trunk passed with JDK v1.7.0_79
          +1 mvninstall 0m 34s the patch passed
          +1 compile 0m 30s the patch passed with JDK v1.8.0_66
          +1 javac 0m 30s the patch passed
          +1 compile 0m 29s the patch passed with JDK v1.7.0_79
          +1 javac 0m 29s the patch passed
          -1 checkstyle 0m 15s Patch generated 4 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 198, now 202).
          +1 mvneclipse 0m 17s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 findbugs 1m 36s the patch passed
          +1 javadoc 0m 32s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 32s the patch passed with JDK v1.7.0_79
          -1 unit 68m 10s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 68m 41s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_79.
          +1 asflicense 0m 29s Patch does not generate ASF License warnings.
          151m 16s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_79 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Client=1.7.0 Server=1.7.0 Image:test-patch-base-hadoop-date2015-11-09
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12771428/YARN-4287-minimal-v3.patch
          JIRA Issue YARN-4287
          Optional Tests asflicense javac javadoc mvninstall unit findbugs checkstyle compile
          uname Linux 3182d018451a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/apache-yetus-ee5baeb/precommit/personality/hadoop.sh
          git revision trunk / 8fbea53
          Default Java 1.7.0_79
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9646/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/9646/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9646/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9646/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9646/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9646/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9646/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 226MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9646/console

          This message was automatically generated.

          nroberts Nathan Roberts added a comment -

          Thanks Wangda Tan for the comments. Made the following changes:

          • Added comments about capping off_switch delay to number of nodes in cluster
          • Added test case to verify we continue to allocate RACK_LOCAL if full_reset is false.
          • Added test case to verify we do reset schedulingOpportunities when full_reset is true (today's behavior)
          • Added test case to verify we cap OFF_SWITCH delay to number of nodes in cluster
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 9s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 3m 56s trunk passed
          +1 compile 0m 30s trunk passed with JDK v1.8.0_60
          +1 compile 0m 29s trunk passed with JDK v1.7.0_79
          +1 checkstyle 0m 16s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 27s trunk passed
          +1 javadoc 0m 32s trunk passed with JDK v1.8.0_60
          +1 javadoc 0m 33s trunk passed with JDK v1.7.0_79
          +1 mvninstall 0m 33s the patch passed
          +1 compile 0m 30s the patch passed with JDK v1.8.0_60
          +1 javac 0m 30s the patch passed
          +1 compile 0m 29s the patch passed with JDK v1.7.0_79
          +1 javac 0m 29s the patch passed
          -1 checkstyle 0m 14s Patch generated 4 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 198, now 202).
          +1 mvneclipse 0m 18s the patch passed
          -1 whitespace 0m 0s The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 findbugs 1m 38s the patch passed
          +1 javadoc 0m 31s the patch passed with JDK v1.8.0_60
          +1 javadoc 0m 33s the patch passed with JDK v1.7.0_79
          -1 unit 64m 39s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_60.
          -1 unit 65m 56s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_79.
          +1 asflicense 0m 37s Patch does not generate ASF License warnings.
          145m 25s



          Reason Tests
          JDK v1.8.0_60 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestClientRMService
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_79 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-11-10
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12771599/YARN-4287-minimal-v4.patch
          JIRA Issue YARN-4287
          Optional Tests asflicense javac javadoc mvninstall unit findbugs checkstyle compile
          uname Linux d591154fe1b4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/apache-yetus-ee5baeb/precommit/personality/hadoop.sh
          git revision trunk / 493e8ae
          Default Java 1.7.0_79
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_60 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9650/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/9650/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9650/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9650/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9650/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-YARN-Build/9650/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9650/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 228MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9650/console

          This message was automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Thanks, Nathan Roberts.
          The patch looks good, +1. I will commit in a few days if there are no objections.

          leftnoteasy Wangda Tan added a comment -

          Committed to trunk/branch-2. Thanks to Nathan Roberts for the patch and to MENG DING for the review!

          leftnoteasy Wangda Tan added a comment -

          And Nathan Roberts, do you think it should be committed to 2.7 as well?

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8798 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8798/)
          YARN-4287. Capacity Scheduler: Rack Locality improvement (Nathan Roberts) (wangda: rev 796638d9bc86235b9f3e5d1a3a9a25bbf5c04d1c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestQueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #1399 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/1399/)
          YARN-4287. Capacity Scheduler: Rack Locality improvement (Nathan Roberts) (wangda: rev 796638d9bc86235b9f3e5d1a3a9a25bbf5c04d1c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestQueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #675 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/675/)
          YARN-4287. Capacity Scheduler: Rack Locality improvement (Nathan Roberts) (wangda: rev 796638d9bc86235b9f3e5d1a3a9a25bbf5c04d1c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestQueueMetrics.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #662 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/662/)
          YARN-4287. Capacity Scheduler: Rack Locality improvement (Nathan Roberts) (wangda: rev 796638d9bc86235b9f3e5d1a3a9a25bbf5c04d1c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestQueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2603 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2603/)
          YARN-4287. Capacity Scheduler: Rack Locality improvement (Nathan Roberts) (wangda: rev 796638d9bc86235b9f3e5d1a3a9a25bbf5c04d1c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestQueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #601 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/601/)
          YARN-4287. Capacity Scheduler: Rack Locality improvement (Nathan Roberts) (wangda: rev 796638d9bc86235b9f3e5d1a3a9a25bbf5c04d1c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestQueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          jlowe Jason Lowe added a comment -

          do you think it should be committed to 2.7 as well?

          +1 for committing this to 2.7.

          leftnoteasy Wangda Tan added a comment -

          Jason Lowe, thanks for the comment.
          Added 2.7.3 to the target version.

          nroberts Nathan Roberts added a comment -

          I will put up a 2.7 version tomorrow morning.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2539 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2539/)
          YARN-4287. Capacity Scheduler: Rack Locality improvement (Nathan Roberts) (wangda: rev 796638d9bc86235b9f3e5d1a3a9a25bbf5c04d1c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestQueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
          • hadoop-yarn-project/CHANGES.txt
          nroberts Nathan Roberts added a comment -

          2.7 version of the patch.

          leftnoteasy Wangda Tan added a comment -

          Thanks for the update, Nathan Roberts. I tried this patch on 2.7 and all CS tests passed. I will commit it to branch-2.7 today if there are no objections.

          leftnoteasy Wangda Tan added a comment -

          Committed to branch-2.7 and updated CHANGES.txt on branch-2/trunk. Thanks to Nathan Roberts for the patch and to Jason Lowe for the review.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8823 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8823/)
          move fix version of YARN-4287 from 2.8.0 to 2.7.3 (wangda: rev 23a130abd7f26ca95d7e94988c7bc45c6d419d0f)

          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #681 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/681/)
          move fix version of YARN-4287 from 2.8.0 to 2.7.3 (wangda: rev 23a130abd7f26ca95d7e94988c7bc45c6d419d0f)

          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2622 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2622/)
          move fix version of YARN-4287 from 2.8.0 to 2.7.3 (wangda: rev 23a130abd7f26ca95d7e94988c7bc45c6d419d0f)

          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #1420 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/1420/)
          move fix version of YARN-4287 from 2.8.0 to 2.7.3 (wangda: rev 23a130abd7f26ca95d7e94988c7bc45c6d419d0f)

          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #693 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/693/)
          move fix version of YARN-4287 from 2.8.0 to 2.7.3 (wangda: rev 23a130abd7f26ca95d7e94988c7bc45c6d419d0f)

          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #617 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/617/)
          move fix version of YARN-4287 from 2.8.0 to 2.7.3 (wangda: rev 23a130abd7f26ca95d7e94988c7bc45c6d419d0f)

          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2555 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2555/)
          move fix version of YARN-4287 from 2.8.0 to 2.7.3 (wangda: rev 23a130abd7f26ca95d7e94988c7bc45c6d419d0f)

          • hadoop-yarn-project/CHANGES.txt

            People

            • Assignee: nroberts Nathan Roberts
            • Reporter: nroberts Nathan Roberts
            • Votes: 0
            • Watchers: 15
