Hadoop YARN / YARN-2637

maximum-am-resource-percent should be respected for both LeafQueue/User when trying to activate applications.

    Details

    • Hadoop Flags:
      Reviewed

      Description

      Currently, the number of AMs in a leaf queue is calculated in the following way:

      max_am_resource = queue_max_capacity * maximum_am_resource_percent
      #max_am_number = max_am_resource / minimum_allocation
      #max_am_number_for_each_user = #max_am_number * userlimit * userlimit_factor
      

      And when a new application is submitted to the RM, it checks whether the app can be activated in the following way:

          for (Iterator<FiCaSchedulerApp> i=pendingApplications.iterator(); 
               i.hasNext(); ) {
            FiCaSchedulerApp application = i.next();
            
            // Check queue limit
            if (getNumActiveApplications() >= getMaximumActiveApplications()) {
              break;
            }
            
            // Check user limit
            User user = getUser(application.getUser());
            if (user.getActiveApplications() < getMaximumActiveApplicationsPerUser()) {
              user.activateApplication();
              activeApplications.add(application);
              i.remove();
              LOG.info("Application " + application.getApplicationId() +
                  " from user: " + application.getUser() + 
                  " activated in queue: " + getQueueName());
            }
          }
      

      For example: if a queue has capacity = 1G and max_am_resource_percent = 0.2, the maximum resource AMs can use is 200M. Assuming minimum_allocation = 1M, up to 200 AMs can be launched. If each AM actually uses 5M (> minimum_allocation), all 200 apps can still be activated, and their AMs will occupy the queue's entire resource instead of only max_am_resource_percent of it.
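      To make the failure mode concrete, here is a minimal standalone sketch of that arithmetic (hypothetical Java, sizes in MB; the 1G queue is treated as 1000M for round numbers):

        public class AmLimitExample {
          public static void main(String[] args) {
            int queueCapacity = 1000;            // queue capacity ~ 1G
            double maxAmResourcePercent = 0.2;   // intended AM budget: 20% => 200M
            int minimumAllocation = 1;           // 1M minimum allocation

            // Count-based limit derived from the minimum allocation:
            int maxAmResource = (int) (queueCapacity * maxAmResourcePercent); // 200
            int maxAmNumber = maxAmResource / minimumAllocation;              // 200 AMs admitted

            // If each AM actually uses 5M, the admitted AMs consume the whole queue:
            int actualAmUsage = maxAmNumber * 5; // 1000M, five times the intended budget
            System.out.println(maxAmNumber + " AMs -> " + actualAmUsage + "M used by AMs");
          }
        }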

      Attachments

      1. YARN-2637.0.patch
        11 kB
        Craig Welch
      2. YARN-2637.1.patch
        12 kB
        Craig Welch
      3. YARN-2637.12.patch
        42 kB
        Craig Welch
      4. YARN-2637.13.patch
        42 kB
        Craig Welch
      5. YARN-2637.15.patch
        28 kB
        Craig Welch
      6. YARN-2637.16.patch
        30 kB
        Craig Welch
      7. YARN-2637.17.patch
        31 kB
        Craig Welch
      8. YARN-2637.18.patch
        32 kB
        Craig Welch
      9. YARN-2637.19.patch
        33 kB
        Craig Welch
      10. YARN-2637.2.patch
        11 kB
        Craig Welch
      11. YARN-2637.20.patch
        26 kB
        Craig Welch
      12. YARN-2637.21.patch
        19 kB
        Craig Welch
      13. YARN-2637.22.patch
        35 kB
        Craig Welch
      14. YARN-2637.23.patch
        35 kB
        Craig Welch
      15. YARN-2637.25.patch
        21 kB
        Craig Welch
      16. YARN-2637.26.patch
        23 kB
        Craig Welch
      17. YARN-2637.27.patch
        45 kB
        Craig Welch
      18. YARN-2637.28.patch
        46 kB
        Craig Welch
      19. YARN-2637.29.patch
        48 kB
        Craig Welch
      20. YARN-2637.30.patch
        52 kB
        Craig Welch
      21. YARN-2637.31.patch
        76 kB
        Craig Welch
      22. YARN-2637.32.patch
        77 kB
        Craig Welch
      23. YARN-2637.36.patch
        79 kB
        Craig Welch
      24. YARN-2637.38.patch
        81 kB
        Craig Welch
      25. YARN-2637.39.patch
        81 kB
        Craig Welch
      26. YARN-2637.40.patch
        84 kB
        Craig Welch
      27. YARN-2637.6.patch
        35 kB
        Craig Welch
      28. YARN-2637.7.patch
        36 kB
        Craig Welch
      29. YARN-2637.9.patch
        39 kB
        Craig Welch

        Issue Links

          Activity

          cwelch Craig Welch added a comment -

          I think the fix is fairly straightforward. There is an "amResource" property on SchedulerApplicationAttempt / FiCaSchedulerApp, but it does not appear to be populated in the CapacityScheduler case (it should be, and the information is available at submission time from the application's resource requests). Populate this value, then add a Resource property to LeafQueue representing the resources used by active application masters: when an application starts, add its amResource value to the LeafQueue's active-application-master resource total; when an application ends, subtract it. Before starting an application, compare the sum of the active application masters' resources plus the new application's AM resource against the resource represented by the percentage of cluster resource allowed to be used by AMs in the queue (this can differ by queue...), and do not start the application if it exceeds that value. The existing trickle-down logic based on the minimum allocation should be removed; the logic limiting how many applications can run based on explicit configuration should be retained.

          // Pseudocode: start only if AM usage stays within the queue's AM
          // budget and the explicit application-count limit is respected
          if ((queue.activeApplicationMasterResourceTotal
                  + readyToStartApplication.applicationMasterResource)
                <= queue.portionOfClusterResourceAllowedForApplicationMaster * clusterResource
              && runningApplications + 1 <= maxAllowedApplications) {
            queue.startTheApp();
          }
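          To illustrate, a minimal sketch of the proposed accounting - the class, field, and method names below are hypothetical, only Resource/Resources are real YARN APIs:

            import org.apache.hadoop.yarn.api.records.Resource;
            import org.apache.hadoop.yarn.util.resource.Resources;

            // Sketch only; not the actual patch.
            class AmResourceAccounting {
              // Running total of resources held by active application masters.
              private final Resource activeAMResource = Resources.createResource(0, 0);
              private final float maxAMResourcePercent = 0.1f; // assumed config value

              // Would the AM budget still be respected if this AM started?
              synchronized boolean canActivate(Resource amResource, Resource clusterResource) {
                Resource amLimit = Resources.multiply(clusterResource, maxAMResourcePercent);
                Resource amIfStarted = Resources.add(activeAMResource, amResource);
                return Resources.fitsIn(amIfStarted, amLimit);
              }

              synchronized void onActivate(Resource amResource) { // application starts
                Resources.addTo(activeAMResource, amResource);
              }

              synchronized void onFinish(Resource amResource) {   // application ends
                Resources.subtractFrom(activeAMResource, amResource);
              }
            }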
          
          cwelch Craig Welch added a comment -

          Attaching a rough but, I think, serviceable work-in-progress patch. Based on manual testing and checking the logs, it looks to work as it should. Still need to write some unit tests & validate it against the existing tests...

          cwelch Craig Welch added a comment -

          Had forgotten to remove the resource when the application finishes - updated patch does so. I think this actually needs to be a per-cluster (rather than a per-queue) limit, based on the name & the behavior most seem to expect - except that there can be a per-queue override of the value, and most other values like it end up being evaluated at the queue level. It seems as though either this should be a global value or possibly based on a portion of the cluster (perhaps the queue's baseline portion of the cluster, then adjusted). Most likely, the right approach is to make the "usedAMResources" a single per-cluster value by attaching it to the parent queue (the AbstractCSQueue instance of the root queue) - which wouldn't be difficult - and then it would be per-cluster, as it probably should be.

          cwelch Craig Welch added a comment -

          Go ahead and allow cores to be part of the am resource limit...

          cwelch Craig Welch added a comment -

          Updated patch which passes existing unit tests in the resourcemanager/capacity scheduler area. Still has extra debug logging and needs unit tests specific to the change. Setting patch available to see if unit tests outside what I have checked are impacted/etc.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12682825/YARN-2637.6.patch
          against trunk revision c298a9a.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens
          org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.TestRMContainerImpl
          org.apache.hadoop.yarn.server.resourcemanager.security.TestAMRMTokens
          org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
          org.apache.hadoop.yarn.server.resourcemanager.TestApplicationCleanup
          org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCResponseId
          org.apache.hadoop.yarn.server.resourcemanager.TestResourceManager
          org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestSchedulerUtils
          org.apache.hadoop.yarn.server.resourcemanager.TestAMAuthorization
          org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterLauncher
          org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates
          org.apache.hadoop.yarn.server.resourcemanager.reservation.TestCapacitySchedulerPlanFollower
          org.apache.hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart
          org.apache.hadoop.yarn.server.resourcemanager.TestRM

          The following test timeouts occurred in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.TestClientRMService
          org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
          org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/5900//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5900//console

          This message is automatically generated.

          djp Junping Du added a comment -

          Hi Craig Welch, thanks for your patch update. Could you please check whether the failed tests are related to your latest patch? Thanks!

          cwelch Craig Welch added a comment -

          Change which should fix most failing tests...

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12683244/YARN-2637.7.patch
          against trunk revision a4df9ee.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.TestResourceManager
          org.apache.hadoop.yarn.server.resourcemanager.reservation.TestCapacitySchedulerPlanFollower

          The following test timeouts occurred in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.TestClientRMService
          org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
          org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/5912//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5912//console

          This message is automatically generated.

          cwelch Craig Welch added a comment -

          This patch seems to pass all the existing unit tests on my box; verifying. Still to do: unit tests specific to the change, and removing some extra logging.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12683370/YARN-2637.9.patch
          against trunk revision 555fa2d.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 13 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/5917//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5917//console

          This message is automatically generated.

          cwelch Craig Welch added a comment -

          Added a test specific to the changed behavior; all existing tests should still pass. This patch should be ready for review.

          cwelch Craig Welch added a comment -

          One open question still in my mind is whether or not the configuration parameter should be changed to actually behave as a "percent". Other things so named (userlimit, at least) are actually a percentage - and the name of this parameter tends to suggest that - but it is actually just a float value (so you would use .1 to limit to 10 percent of cluster resource, not 10...). I did take a pass at making the change, it looks doable (with quite a few more test changes...). On the one hand, it seems like the time to make this change, as the meaning of the value is changing considerably as it is. On the other hand, it may be more impact than we want - as users who have configured, say, .3, will still have about the same behavior on a sizable cluster as they do today with the change as it is now, but if we modify it to actually behave as a "percent" value (e.g. / 100), then it will have a far more limiting impact (if the users do not adjust their configuration). Thoughts? Myself, I can see arguments both ways, though I'm leaning toward making the change to remove all of the "surprise" factor of how this parameter works... (e.g. make it a proper % value)

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12683474/YARN-2637.12.patch
          against trunk revision 61a2510.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 13 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation
          org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService

          The following test timeouts occurred in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/5928//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5928//console

          This message is automatically generated.

          hitesh Hitesh Shah added a comment -

          One open question still in my mind is whether or not the configuration parameter should be changed to actually behave as a "percent".

          Doing this would be an incompatible change for 2.x, which is not allowed. Another option might be to deprecate the percent property and add a new config property called *.fraction or something similar, if the 0 < x < 1 value represented by *.percent is confusing.

          hitesh Hitesh Shah added a comment -

          Sorry - ignore my previous comment. I will need to look at the patch in more detail. It seems the fix for this introduces a new property, which could be one route to go down.

          cwelch Craig Welch added a comment -

          Unexpected break of some tests on last change, fixed.

          cwelch Craig Welch added a comment -

          I did put together a patch which changes the value to function as a proper "percentage" value, but I'll hold off on it for the moment due to the compatibility concern.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12683610/YARN-2637.13.patch
          against trunk revision 61a2510.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 13 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/5932//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5932//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12683610/YARN-2637.13.patch
          against trunk revision b36f292.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 13 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/5974//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5974//console

          This message is automatically generated.

          djp Junping Du added a comment -

          Hi Craig Welch, sorry for coming late and thank you for updating the patch. A couple of comments:

              Resource amLimit = 
                Resources.multiply( 
                    lastClusterResource, 
                    maxAMResourcePerQueuePercent);
          

          Looks like maxAMResourcePerQueuePercent is an allowed percent for AM resources in each queue, so perhaps we should calculate amLimit per queue rather than aggregating all applications together.

          +  private int maxActiveApplicationsForQueue = -1; // To allow manualy setting
          

          typo for manualy

          +  protected final Resource usedAMResources;
          

          usedAMResources is not used by any subclass, so suggest replacing protected with private.

          +      if (application.getAMResource() == null) throw new RuntimeException("c1");
          +      if (usedAMResources == null) throw new RuntimeException("c2");
          

          Exception messages here should be more meaningful than "c1", or "c2".

          +      if (!Resources.fitsIn(amIfStarted, amLimit)) {
          +        LOG.debug("not starting application as amIfStarted exceeds amLimit");
          +        continue;  
          +      }
          

          The log level here should be info or warn rather than debug. Also, in most cases LOG.debug() should be wrapped in a LOG.isDebugEnabled() check.

          More comments may come later.

          cwelch Craig Welch added a comment -

          First, the easy parts

          typo for manualy

          fixed

          usedAMResources is not used by any subclass, so suggest replacing protected with private.

          done

          Exception messages here should be more meaningful than "c1", or "c2".

          yup - fixed

          The log level here should be info or warn rather than debug. Also, in most cases LOG.debug() should be wrapped in a LOG.isDebugEnabled() check.

          So, I had made this debug rather than something higher because I'm not sure we always care, and it doesn't represent a failure case - this is the normal/expected case, and other similar cases for not starting the app don't log at all. But I can see that it will be helpful to know this, and I don't think it will result in excessive logging - so I went ahead and made it an "info" level, sound good? BTW, the "isDebugEnabled" idiom exists to save the cost of constructing the log message's arguments, which can be expensive; for cheap cases like this (a string literal) it's unnecessary, as the only cost is the same level check the logging call performs anyway.
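          To make the idiom concrete, a small sketch (the variables are illustrative):

            // Costly message: guard it so the concatenation only runs at DEBUG.
            if (LOG.isDebugEnabled()) {
              LOG.debug("not activating " + application.getApplicationId()
                  + ": amIfStarted=" + amIfStarted + " exceeds amLimit=" + amLimit);
            }

            // Cheap literal: a guard buys nothing, since the only cost is the
            // same level check the call performs internally.
            LOG.info("not starting application as amIfStarted exceeds amLimit");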

          Now for the more complicated one:

          Looks like maxAMResourcePerQueuePercent is an allowed percent for AM resources in each queue, so perhaps we should calculate amLimit per queue rather than aggregating all applications together.

          So, yes and no - the current behavior actually takes the maxAM... which is set globally and it apportions it out based on the queue's baseline share of the cluster - so if the maxam was say, 10%, and a given queue had 50% of the cluster, it would have an effective maxampercent value of 5% (it's translated into "how many apps can I have running" based on the minallocation of the cluster rather than actual am usage - which is the problem which prompted the fix - but the important thing to get here is the way the overall maxampercent is apportioned out to the queues) There is also the option to override on a per queue basis, so that, in the above scenario, if you didn't like the queue getting the 5% based on the overall process, but you were happy with how other queues were working using the config, you could just override for the given queue.
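          For reference, the global setting and the per-queue override discussed here live in capacity-scheduler.xml along these lines (the queue path root.a is hypothetical):

            <!-- Global limit: fraction of cluster resources usable by AMs -->
            <property>
              <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
              <value>0.1</value>
            </property>

            <!-- Per-queue override for the hypothetical queue root.a -->
            <property>
              <name>yarn.scheduler.capacity.root.a.maximum-am-resource-percent</name>
              <value>0.2</value>
            </property>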

          When I tried to translate this into something which was actually paying attention to the real usage of the ams, two approaches seemed reasonable:

          1. Just have a global used-AM-resource value and use the global AM percent everywhere (not apportioned) - this way the total cluster-level effect is what we want - in this case, the subdivision of the amresource percent value is replaced with a total summing of the used resource amongst the queues. You can still override for a given queue if you want "this queue to be able to go higher", which has the effective result of allowing one queue to go higher than the others; this could starve other queues (bad), but that was already possible with the other approach, albeit in a different way (when the cluster came to be filled with AMs from one particular queue).

          2. We could subdivide the global maxampercent based on the queue's share of the baseline (as before) and then have a per-queue amresource percent (and AM-used value) which are evaluated - this would not be a difficult change from the current approach, but I think it is problematic for the reason below.

          The main reason I took approach one over two is that I was concerned that, with a complex queue structure with a reasonable level of subdivision in a smallish cluster, you could end up with a queue which can effectively never start anything, because the final value is too small to ever start one of the larger AMs we have these days. By sharing it globally this is less likely to happen, because the "unused AM resource" allocated out to other queues with a larger share of the cluster is not potentially sitting idle while "leaf queue a.b.c" has a derived maxampercent of, say, 2%, which translates into 512MB, and so can never start an application master which needs 1G (even though, globally, there's more than enough ampercent to do so). It's the "this queue can never start an AM over size x" that concerns me. There are other possible ways to handle this with option 2, but I'm concerned they would add complexity and change the behavior more than is needed to correct the defect.

          Junping Du - does that make sense? Thoughts? I may take a go at option 2 so we can evaluate it, but I'm concerned about the small-cluster/too-much-subdivision scenario being problematic.

          leftnoteasy Wangda Tan added a comment -

          Craig Welch,
          I think option #2 makes more sense to me, since each allocation will check only the queue's capacity limit. IIUC, option #1 could lead to some queues being entirely occupied by AMs, which is why we introduced the max-am-resource parameter in the first place.

          For option #2, we can allow each user to run at least one AM in spite of the max AM resource, to avoid the problem mentioned. In a real-world cluster, the capacity of a queue should be >> the maximum size of container we can launch. Do you agree?

          Thanks,
          Wangda

          cwelch Craig Welch added a comment -


          Hmmm, Wangda Tan - option 1 does have the possible issue you describe, and the issue with possibly starving all other queues if one queue has the am percent set higher than the others I mentioned above. The approach of only enforcing the limit if at least one application is running was the one I was thinking of if we went with 2 - the other being to not count the new app in when doing the check (so it's only retroactive to what has started) - but I like the former better as it will reduce the overage as much as possible. Obviously, either approach can allow things to exceed the maxampercent if there are a large number of queues, but there are tradeoffs either way, and it's probably a smaller risk... I'll see about a patch for approach 2.

          djp Junping Du added a comment -

          Thanks Craig Welch for replying my comments and Wangda Tan for your feedback.

          option 1 does have the possible issue you describe, and the issue with possibly starving all other queues if one queue has the am percent set higher than the others I mentioned above.

          Agree. Option 1 could be exploited by malicious behavior in a multi-tenant scenario, i.e. one user can request more AM resources to block AMs in other queues. Option 2 sounds reasonable, and I agree that we should make sure at least 1 AM gets launched and warn in this case (percentage set too low). Thoughts?
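          Pulling the thread together, a rough sketch of what the agreed option-2 check could look like (everything beyond the Resources calls is illustrative, not the actual patch):

            // Per-queue AM budget, but never starve the queue outright.
            synchronized boolean canActivate(Resource amResource, Resource clusterResource) {
              Resource amLimit = Resources.multiply(clusterResource,
                  maxAMResourcePercent * absoluteMaxCapacity); // assumed fields
              Resource amIfStarted = Resources.add(activeAMResource, amResource);
              if (Resources.fitsIn(amIfStarted, amLimit)) {
                return true;
              }
              if (getNumActiveApplications() == 0) {
                // Escape hatch: admit the first AM anyway, and warn that the
                // configured percentage is too low to honor.
                LOG.warn("maximum-am-resource-percent too low to start any AM in queue "
                    + getQueueName() + "; activating one application anyway");
                return true;
              }
              return false;
            }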

          cwelch Craig Welch added a comment -

          Attached .15 patch which implements option 2

          cwelch Craig Welch added a comment -

          .16 is the same as .15, but with additional verification of the effect of maxCapacity on the maxAMResource value.

          cwelch Craig Welch added a comment -

          Something to be aware of wrt the way I implemented option 2: when making the change I noticed that the share of the maxampercent apportioned out to queues was actually based on the maximum capacity of the queue (absMaxCapacity), not the baseline value (absCapacity). I kept this logic because it is consistent with the earlier approach and still provides the desired control, although it potentially allows more aggregate cluster resources to go to AMs if queues have high max values. Meaning: if the total max is, say, 200% (the sum of queue maxes, as opposed to the 100% baseline), then the standard .1 maxam will actually allow 20% usage of cluster resources for application masters. You can still manage the usage effectively, though, so I thought it best to stay as consistent as possible with the existing meaning (while fixing the surprising aspect of it...); just wanted to make sure you were aware / it was documented.
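          A worked example of the aggregate effect (standalone Java, hypothetical two-queue setup):

            public class AmShareExample {
              public static void main(String[] args) {
                // Two queues, each with maximum-capacity 100%: max capacities sum to 200%.
                double maxAMResourcePercent = 0.1; // global setting
                double absMaxCapacityQ1 = 1.0;     // queue 1 may grow to the whole cluster
                double absMaxCapacityQ2 = 1.0;     // queue 2 likewise

                // Each queue's AM budget derives from its MAX capacity, not its baseline:
                double amShareQ1 = maxAMResourcePercent * absMaxCapacityQ1; // 0.10
                double amShareQ2 = maxAMResourcePercent * absMaxCapacityQ2; // 0.10

                // Aggregate AM usage can reach 20% of the cluster, double the nominal 10%.
                System.out.println("aggregate AM share: " + (amShareQ1 + amShareQ2));
              }
            }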

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12685463/YARN-2637.15.patch
          against trunk revision 475c6b4.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 7 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6015//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6015//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12685475/YARN-2637.16.patch
          against trunk revision 475c6b4.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6019//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6019//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12685475/YARN-2637.16.patch
          against trunk revision 120e1de.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerQueueACLs
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueParsing
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerDynamicBehavior
          org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage

          The following test timeouts occurred in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueMappings

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6030//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6030//console

          This message is automatically generated.

          djp Junping Du added a comment -

          I manually kick off Jenkins test again for latest patch.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12685475/YARN-2637.16.patch
          against trunk revision 144da2e.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6035//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6035//console

          This message is automatically generated.

          djp Junping Du added a comment -

Thanks Craig Welch for updating the patch! The patch looks good overall; some minor comments:

          +  public int getMaximumActiveApplicationsForQueue(String queue) {
          +    int maxActiveApplicationsForQueue = 
          +      getInt(getQueuePrefix(queue) + MAXIMUM_ACTIVE_APPLICATIONS_SUFFIX, 
          +        getInt(DEFAULT_MAXIMUM_ACTIVE_QUEUE_APPLICATIONS, -1));
          +    return maxActiveApplicationsForQueue;
          +  }
          

If my understanding is correct, here we first try to get a per-queue value; if that is not set, we fall back to a default value common to all queues, and finally to -1, which means the value will be calculated later.
Do we set any default value for DEFAULT_MAXIMUM_ACTIVE_QUEUE_APPLICATIONS anywhere (in code or capacity-scheduler.xml)? I think the answer is no, so we had better document it somewhere so that users can understand what to do.

          +    if (maxActiveApplicationsForQueue != -1) {
          +      //is manually configured
          +      maxActiveApplications = maxActiveApplicationsForQueue;
          +    } else {
          +      maxActiveApplications =
          

Do we need to validate the value of maxActiveApplicationsForQueue? If the user sets some other negative value, it would be better to log a warning message here.
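
A minimal sketch of the validation being suggested here; the class, method, and message are illustrative, not from the patch:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Illustrative only: anything below the -1 "unset" sentinel is treated as a
// misconfiguration; warn and fall back to the computed limit.
final class MaxActiveAppsValidation {
  private static final Log LOG =
      LogFactory.getLog(MaxActiveAppsValidation.class);

  static int validate(String queue, int configured) {
    if (configured < -1) {
      LOG.warn("Ignoring invalid maximum active applications value "
          + configured + " for queue " + queue
          + "; treating it as unset (-1)");
      return -1;
    }
    return configured;
  }
}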

          +	  
          +	  //Verify the value for getAMResourceLimit for queues with < .1 maxcap
          +	  Resource clusterResource = Resource.newInstance(50 * GB, 50);
          +	  
          +	  a.updateClusterResource(clusterResource);
          +    assertEquals(Resources.multiply(clusterResource, 
          +      a.getAbsoluteMaximumCapacity() * a.getMaxAMResourcePerQueuePercent()), 
          +      a.getAMResourceLimit());
          +	  
          +	  b.updateClusterResource(clusterResource);
          +    assertEquals(Resources.multiply(clusterResource, 
          +      b.getAbsoluteMaximumCapacity() * b.getMaxAMResourcePerQueuePercent()), 
          +      b.getAMResourceLimit());
          

The formatting should be adjusted, and TAB characters should be replaced with spaces.

Everything else looks fine to me. Wangda Tan, do you have additional comments?

          cwelch Craig Welch added a comment -

Made Junping Du's recommended changes, thanks for taking a look. Wangda Tan, any other comments?

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12686414/YARN-2637.17.patch
          against trunk revision 92916ae.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 15 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesFairScheduler
          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6079//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-YARN-Build/6079//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6079//console

          This message is automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Craig Welch, I will take a look at this patch today as well.

          Thanks,

          cwelch Craig Welch added a comment -

I double-checked: none of the findbugs warnings are related to my change, and the failing tests actually pass on my box with the change applied; they are unrelated in any case, as far as I can see. There's plenty of chatter on other JIRAs that this is related to the jdk/findbugs update... so I believe these can be ignored.

          djp Junping Du added a comment -

Ok. Thanks for double-checking it. I will wait for Wangda Tan's review comments and may commit it tomorrow if there are no further comments.

          leftnoteasy Wangda Tan added a comment -

          Hi Craig,
First, a major comment; I'm not quite sure if it was discussed before:
I feel MAXIMUM_ACTIVE_APPLICATIONS_SUFFIX and DEFAULT_MAXIMUM_ACTIVE_QUEUE_APPLICATIONS make configuration more complex. Since we already have MAXIMUM_AM_RESOURCE_SUFFIX, it should be enough to define how much resource can be used for AMs in a queue. Other fields of LeafQueue, like maxActiveApplicationsPerUser, don't have such manual configuration either. It is also possible for maxActiveApplicationsPerUser and a manually set maxActiveApplications not to match.

Is there any actual requirement for adding the two manual parameters? I prefer to drop the two options to keep the patch simple if there is no actual requirement.

          Some minor comments:
1. The two checks may not be necessary; these values will never be null:

          +      if (application.getAMResource() == null) {
          +        throw new RuntimeException("Application getAMResource returned 'null'");
          +      }
          +      if (usedAMResources == null) {
          +        throw new RuntimeException("Queue's usedAMResources is 'null'"); 
          +      }
          

          2. FiCaSchedulerApp constructor
Similar to the above: when will amRequest be null? And when will amRequest.getCapability() be null?
This seems dangerous to me:

          +    if (amResource == null) {
          +      amResource = Resource.newInstance(0, 0);
          +    }
          

You should throw an exception when amResource == null is illegal; a valid case I can think of is an unmanaged AM. Could you check that?
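
A minimal sketch of the suggested handling, assuming the unmanaged-AM flag is known at this point (names are illustrative, not from the patch):

import org.apache.hadoop.yarn.api.records.Resource;

// Illustrative only: tolerate a null AM resource solely for unmanaged AMs,
// which run outside the cluster and consume no scheduler resource; fail
// fast for managed AMs.
final class AmResourceNullHandling {
  static Resource normalize(Resource amResource, boolean unmanagedAM) {
    if (amResource != null) {
      return amResource;
    }
    if (unmanagedAM) {
      return Resource.newInstance(0, 0);
    }
    throw new IllegalArgumentException(
        "AM resource must not be null for a managed AM");
  }
}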

          3. MockRM:
Why is this needed? Is there any issue with the original default value?

          +  protected void setTestConfigs(Configuration conf) {
          +    conf.set(
          +    CapacitySchedulerConfiguration.MAXIMUM_APPLICATION_MASTERS_RESOURCE_PERCENT,
          +    "1.0");
          +  }
          

And there are similar changes in TestResourceManager/TestCapacityScheduler.

          4. TestApplicationLimits
Can you add a test for accumulated AM resource checking, e.g. app1.am + app2.am < limit but app1.am + app2.am + app3.am > limit? You have such logic in LeafQueue; a sketch of the rule under test follows below.
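
As a hedged sketch (not the patch's exact code), the accumulation rule such a test would exercise is:

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

// Illustrative only: an app activates when the AM resource already in use
// plus its own AM resource still fits within the queue's AM limit, so
// app1 + app2 may activate while app3 stays pending.
final class AmLimitRule {
  static boolean canActivate(Resource usedAMResources,
      Resource appAMResource, Resource amResourceLimit) {
    return Resources.fitsIn(
        Resources.add(usedAMResources, appAMResource), amResourceLimit);
  }
}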

          Thanks,
          Wangda

          cwelch Craig Welch added a comment -

Updated with some changes based on Wangda Tan's comments.

          cwelch Craig Welch added a comment -

Wangda Tan, I updated the patch with some changes based on your comments; details below:

          (first the lesser comments):

          1. The two checks may not be necessary, they will never be null

So, yes and no. When running "for real", no, they never will be. We have a multitude of mocking cases for tests, however, and at times they were null. I put these checks in to make the resulting exceptions easier to understand in those cases. As I had (previously) tracked them all down, I'll go ahead and remove these as you suggest, though I have mixed feelings about that, since they may cause confusion for a developer down the road...

          2. FiCaSchedulerApp constructor

So, I have left this in - there are a plethora of different places/ways in which these are mocked in tests, and without this it would be necessary to make a great many rather intricate changes to the test mocking. If no value is provided during submission in the "real" case, this is the default anyway, so it did not seem dangerous to propagate it here for the test cases which do not travel that path and are therefore subject to NPEs.

          MockRM: Why this is needed? Is there any issue of original default value?

Tests were depending on the previous, incorrect behavior to run: the actual size of the AMs relative to the cluster size was such that many tests would fail because their applications would not start (they are, effectively, "very small clusters"). We have targeted tests specific to the max-AM logic where this is not in play; for other cases I want to make sure it is "out of the way" so that they can concentrate on what they are testing, hence the change in value.

          TestApplicationLimits - Can you add a test for accumulated AM resource checking? Like app1.am + app2.am < limit but app1.am + app2.am + app3.am > limit.

          Yes, I think that's a good test to add - done

-re maximumActiveApplications - this is a good question. Before this change it was possible to effectively set this value by just doing a bit of math, because the "pretend" AM size was a fixed value. Now that the real AM size is being used instead, and it can vary, it's no longer possible to effectively set a "maxActiveApplications" using the amresourcelimit. When interacting with some folks who were doing system testing, and while going through the unit tests, I found that people were, in some cases, expecting to be able to do that / depending on it. We also had some unit tests with the same expectation. Based on these existing cases I was concerned that, without this, we would be taking away a feature that I know is being used. I think of it as a "backward compatibility" requirement, and I do think we need it / it has practical value. I've not seen maxActiveApplications per user being used in this way, and it would be more difficult to do that anyway, so I did not add that ability (I'm of the same opinion that it's better not to add something where there is not a clear need for it).

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12687880/YARN-2637.18.patch
          against trunk revision a1bd140.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 14 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6137//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-YARN-Build/6137//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6137//console

          This message is automatically generated.

          cwelch Craig Welch added a comment -

Modified the patch to use the minimum allocation value if the application master resource is unavailable.

          cwelch Craig Welch added a comment -

Minor update - TestAllocationFileLoaderService passes for me; I think that failure is a build server issue. Also, I believe the findbugs warnings are still related to the jdk7 update, but they had disappeared before I had a chance to verify; they will re-run with the new patch and I will confirm that they are not related to my change. In all, I believe the change is fine in terms of the release audit checks...

          The only change in this version vs the last is with respect to:

          2. FiCaSchedulerApp constructor

As I said before, this is present in non-test scenarios as well. However, I realized that I could use the minimum allocation from the scheduler if the AM resource is not present, which means at worst we would get the "old behavior" when there is no actual amresource to work with, so I adjusted the code to do that where necessary and possible.
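
In sketch form (the helper and its name are illustrative, not the patch verbatim):

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;

// Simplified sketch: prefer the real AM resource, fall back to the
// scheduler's minimum allocation when it is unavailable, which at worst
// reproduces the old fixed-size behavior.
final class AmResourceFallback {
  static Resource resolve(ResourceRequest amRequest, RMContext rmContext) {
    if (amRequest != null && amRequest.getCapability() != null) {
      return amRequest.getCapability();
    }
    if (rmContext.getScheduler() != null) {
      return rmContext.getScheduler().getMinimumResourceCapability();
    }
    // Last resort for heavily mocked tests where no scheduler is present.
    return Resource.newInstance(0, 0);
  }
}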

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12688189/YARN-2637.19.patch
          against trunk revision 0402bad.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 14 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6154//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-YARN-Build/6154//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6154//console

          This message is automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Hi Craig Welch,

          Regarding "FiCaSchedulerApp constructor: Tests were depending on the previous, incorrect behavior to run ...",

I think that is fine even if we have to change a lot of tests to avoid such null checking.

          Tests were depending on the previous, incorrect behavior to run

I think at least one AM should be able to launch in each queue; otherwise the queue makes no sense at all. Imagine a user can see that a queue and the cluster have available resources, but the app in the queue is still pending. I think we can either fix the issue together in this ticket or file a separate JIRA; I prefer the former.
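
One illustrative way to guarantee that at-least-one-AM floor (a sketch only, not from any attached patch):

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

// Illustrative only: let the first application through regardless of the AM
// limit so every queue can make progress, then enforce the limit for the
// rest.
final class FirstAmFloor {
  static boolean canActivate(int numActiveApplications,
      Resource usedAMResources, Resource appAMResource, Resource amLimit) {
    return numActiveApplications == 0
        || Resources.fitsIn(
            Resources.add(usedAMResources, appAMResource), amLimit);
  }
}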
In addition, we should see how many tests would fail instead of just setting the value here. We should not hard-code such configuration in MockRM's constructor; a developer may write a unit test like:

Configuration conf = new Configuration();
conf.setFloat(
    CapacitySchedulerConfiguration.MAXIMUM_APPLICATION_MASTERS_RESOURCE_PERCENT,
    0.123f);
MockRM rm = new MockRM(conf);
          

Because the configured max-AM-resource-percent value would be overwritten by the new MockRM logic.

          -re maximumActiveApplications - this is a good question. Before this change it was possible to effectively set this value by just doing a bit of math because the "pretend" AM size was a fixed value.

I just think that is not correct; as you mentioned, we shouldn't have logic that depends on this incorrect behavior. It is not a backward-compatibility requirement to me; it just leaves some unnecessary logic in the implementation/configuration.
If a user wants to specify the number of apps in a queue just for testing, setting a proper AM percentage and launching AMs with a fixed capacity would be a very easy way to do it. Do I under-estimate this problem?

          Thanks,
          Wangda

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12689408/YARN-2637.20.patch
          against trunk revision 249cc90.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 5 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.
          See https://builds.apache.org/job/PreCommit-YARN-Build/6211//artifact/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesFairScheduler
          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps
          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels

          The following test timeouts occurred in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6211//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6211//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12689908/YARN-2637.21.patch
          against trunk revision 947578c.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 4 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6232//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6232//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12689915/YARN-2637.22.patch
          against trunk revision 947578c.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6234//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12689916/YARN-2637.23.patch
          against trunk revision 947578c.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.TestFifoScheduler
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler

          The following test timeouts occurred in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.TestClientRMService
          org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
          org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6235//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6235//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12689921/YARN-2637.25.patch
          against trunk revision 947578c.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 5 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6236//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6236//console

          This message is automatically generated.

          cwelch Craig Welch added a comment -

I think at least one AM should be able to launch in each queue ... MockRM test config settings

That's been the case since switching to approach 2; some tests need to start more than one app in a queue. In any case, I've removed the MockRM test config settings; the setting is only needed in a few tests now, so I'm setting it in those tests directly, as in the sketch below.
          (done)
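
For example, a test that needs several simultaneous AMs can raise the limit itself, roughly like this (illustrative only):

import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;

// Inside a @Test method: override the AM limit for just this test instead of
// relying on a MockRM-wide default.
YarnConfiguration conf = new YarnConfiguration();
conf.setFloat(
    CapacitySchedulerConfiguration.MAXIMUM_APPLICATION_MASTERS_RESOURCE_PERCENT,
    1.0f);
MockRM rm = new MockRM(conf);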

          -re maximumActiveApplications ... MAXIMUM_ACTIVE_APPLICATIONS_SUFFIX

I removed this new configuration point. Since the AMs are not all the same size, it is no longer possible to directly control how many apps start in a queue outside of testing (it was possible before; now it's not). However, the cases I recall using it for were all workarounds for the fact that the max-AM percent wasn't working properly, so hopefully this won't be missed.
          (done)

          -re null checks in FiCaSchedulerApp constructor
So, the ResourceManager itself checks for null rmapps (ResourceManager.java, around line 830); this is a pre-existing case which is tolerated, and I'm not going to address it. getAMResourceRequest() can also be null for unmanaged AMs. I've reduced the null checks for the app to just these two cases, but those checks should remain.
          (partly done/remaining should stay as-is)

All the build quality checks and tests are passing; I'm not sure why the overall result is red. I think it's a build server issue...

          leftnoteasy Wangda Tan added a comment -

          Hi Craig Welch,
Thanks for updating; the latest patch looks much cleaner to me.

Regarding null checks in FiCaSchedulerApp: since the scheduler assumes the application is in a running state when adding a FiCaSchedulerApp, it is a big issue if the RMApp cannot be found at that time. So rather than just ignoring such an error, I think you need to throw an exception (if that exception will not cause an RM shutdown) and log the error.

And when is this possible?

          +      if (rmContext.getScheduler() != null) {
          +        amResource = rmContext.getScheduler().getMinimumResourceCapability();
          +      }
          

If this is to address a mocking issue in tests, I suggest modifying the test logic to avoid such changes.

TestLeafQueue has some tab characters ("\t"). Could you fix that please?

          Wangda
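A sketch of how Wangda's fail-fast suggestion could look (purely illustrative, not code from any attached patch; it assumes a LOG field and uses YarnRuntimeException on the premise that it would not bring the RM down here):

          RMApp rmApp = rmContext.getRMApps().get(attemptId.getApplicationId());
          if (rmApp == null) {
            // Fail fast and log instead of silently tolerating a missing RMApp.
            String msg = "Unable to find RMApp when adding FiCaSchedulerApp for "
                + attemptId.getApplicationId();
            LOG.error(msg);
            throw new YarnRuntimeException(msg);
          }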

          cwelch Craig Welch added a comment -

Regarding null checks in FiCaSchedulerApp: since the scheduler assumes the application is in the running state when adding a FiCaSchedulerApp, it is a big issue if the RMApp cannot be found at that time. So rather than just ignoring such an error, I think you need to throw an exception (if that exception will not cause an RM shutdown) and log the error.

I'm not quite sure how to phrase this differently to get the point across: it is already the case, throughout the many mocking points which interact with this code, that the rmapp may be null at this point (if that were not the case, it would not be necessary to check for it). As I mentioned previously, the ResourceManager itself checks for this case. I am not introducing the mocking which resulted in this state, or even the existing checks for it in non-test code; I'm receiving this state and carrying it forward in the same way as it has been done elsewhere (and, again, not simply in tests). Changing this does not belong in the scope of this jira, because it represents a rationalization/overhaul of mocking throughout this area (resource manager, schedulers); it is non-trivial and not specific to, or properly within the scope of, this change. Feel free to create a separate jira to improve the mocking throughout the code. The separate null check for the amResourceRequest is necessitated by the apparently intentional behavior of unmanaged AMs.

And when is this possible?

          + if (rmContext.getScheduler() != null)

Again, this happens in existing test paths, and existing code is tolerant of it as well; I'm merely carrying it forward. It would belong in the new jira as well, were one opened.

Regarding \t in LeafQueue: I've checked, and the spacing is consistent with the existing spacing in the file.

          leftnoteasy Wangda Tan added a comment -

Regarding the null checks, I agree we can just go ahead and file a separate ticket to address them; they are not caused by your patch. As you said, in the past the mocked tests forgot to set some fields, but no one triggered that. But I think it will be helpful to solve at least the getScheduler() problem together with this patch, since an additional check will make people maintaining the code in the future spend more time thinking about why such checks exist.

And I've just checked: TestLeafQueue has some existing \t, but not many lines (about 20), so can you reformat them in your patch? We shouldn't make our code consistent with previous bad code style.

          cwelch Craig Welch added a comment -

Reformatted some sections of TestLeafQueue; commented out the null check for rmContext.getScheduler() in FiCaSchedulerApp to see how widespread that condition is in the tests.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12690221/YARN-2637.26.patch
          against trunk revision 0c4b112.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 5 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.reservation.TestCapacitySchedulerPlanFollower
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimits
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestReservations

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6248//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-YARN-Build/6248//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6248//console

          This message is automatically generated.

          cwelch Craig Welch added a comment -

Patched the tests which fail when the null check for rmContext.getScheduler() is not present in FiCaSchedulerApp.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12690392/YARN-2637.27.patch
          against trunk revision 4cd66f7.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6256//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-YARN-Build/6256//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6256//console

          This message is automatically generated.

          leftnoteasy Wangda Tan added a comment -

The failed test should not be related to this patch. Could you check the findbugs warning? Aside from the findbugs warning, +1.

          Thanks,

          jianhe Jian He added a comment -

          #max_am_number_for_each_user = #max_am_number * userlimit * userlimit_factor

I think the max-am-per-user has not been fixed yet?

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12690430/YARN-2637.28.patch
          against trunk revision dd57c20.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6258//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6258//console

          This message is automatically generated.

          cwelch Craig Welch added a comment -

The findbugs warning was the result of changing the ratio of synchronized to unsynchronized accesses, which hit the findbugs threshold, not of the pattern itself, which looks fine, so I added a findbugs exclusion. TestFairScheduler passes on my box with the change, so it's build-server related / not a real issue.

I was not originally planning to address the max AM percent per user, as that wasn't the issue we kept encountering, but I forgot to mention this / edit the jira to reflect it. However, I'm going to see what the impact would be of adding that now, and then we can decide to include it or move it to its own jira.

          hitliuyi Yi Liu added a comment -

The findbugs warning was the result of changing the ratio of synchronized to unsynchronized accesses, which hit the findbugs threshold, not of the pattern itself, which looks fine, so I added a findbugs exclusion.

Not exactly; in FairScheduler it's a real issue, and we need synchronized for resolveReservationQueueName.
There is already a JIRA, YARN-3010, to fix the findbugs warning...

          cwelch Craig Welch added a comment -

Taking a go at adding the user AM limit as well (needs further verification/tests), to see the test impact.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12690467/YARN-2637.29.patch
          against trunk revision 788ee35.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6264//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-YARN-Build/6264//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6264//console

          This message is automatically generated.

          cwelch Craig Welch added a comment -

          userAMLimit logic included as well, now with a test

          leftnoteasy Wangda Tan added a comment -

          Hi Craig Welch,
Thanks for updating; it's good to see that not too much code was added to support max-am-resource-percent for users.

          Some comments:

          1. In LeafQueue,

1.1 This check is not needed, right? Since we already count consumed AM resource for each user:

                if (user.getActiveApplications() < getMaximumActiveApplicationsPerUser()) {
          

          1.2 maxActiveAppsUsingAbsCap and maxActiveApplicationsPerUser could be removed?

1.3 Such a log when activating in a queue/user will be very frequent; guard it with isDebugEnabled() and use LOG.debug instead?

          +          LOG.info("not starting application as amIfStarted exceeds amLimit");
          +          continue;
          

And print the amIfStarted/amLimit resources in the log instead of just saying it exceeds?

          1.4 Comment should be "// Check user AM resource limit?"

          +      // Check user limits
                 User user = getUser(application.getUser());
          +      
          +      // AM Resource Limit
          

          2. Test
Similar to what I suggested for the queue AM resource limit test, testUserAMResourceLimitAccumulated should really be "accumulated"; the test case should allow > 1 app for each user.

          Wangda
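For point 1.3, the usual guarded-logging pattern would look roughly like this (a sketch only; the variable names follow the snippet quoted above):

          if (LOG.isDebugEnabled()) {
            // Include the actual resource values, not only the fact of the limit.
            LOG.debug("Not activating application " + application.getApplicationId()
                + " as amIfStarted: " + amIfStarted + " exceeds amLimit: " + amLimit);
          }
          continue;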

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12690582/YARN-2637.30.patch
          against trunk revision 788ee35.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6270//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6270//console

          This message is automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Updated title to better describe what we're trying to do.

          cwelch Craig Welch added a comment -

          See what happens when maxActiveApplications and maxActiveApplicationsPerUser are removed altogether

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12690699/YARN-2637.31.patch
          against trunk revision ef237bd.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimits

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6277//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6277//console

          This message is automatically generated.

          cwelch Craig Welch added a comment -

          Check tests using absoluteCapacity for userAmLimit

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12690722/YARN-2637.32.patch
          against trunk revision ef237bd.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue
          org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimits
          org.apache.hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens
          org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6278//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6278//console

          This message is automatically generated.

          jianhe Jian He added a comment -

Quick thing: YARN-3010 fixed the findbugs warning, so the findbugs exclusion in the patch may not be needed.

          cwelch Craig Welch added a comment -

Should be down to one failing test; let's see.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12691012/YARN-2637.36.patch
          against trunk revision ae91b13.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6289//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6289//console

          This message is automatically generated.

          cwelch Craig Welch added a comment -

This should be the core logic, but I still plan to add some additional tests and to add the amLimit values to the web UI/info.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12691457/YARN-2637.38.patch
          against trunk revision 8c3e888.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6297//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-YARN-Build/6297//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6297//console

          This message is automatically generated.

          cwelch Craig Welch added a comment -

Now with web UI entries for the max AM and max AM per-user resources, plus application limit tests.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12691482/YARN-2637.39.patch
          against trunk revision a260406.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6302//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-YARN-Build/6302//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6302//console

          This message is automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Craig Welch,
Thanks for updating. I reviewed the latest patch; some comments:

          LeafQueue.java:

          1) getUserAMResourceLimit()
Given how we compute user-limit in LeafQueue, should we compute user-am-resource-limit the following way?

user-am-resource-limit =
    am-resource-percent * min(
        queue-max-capacity * max(user-limit, 1/#active-users),
        queue-configured-capacity * user-limit-factor)

          Thoughts?

          FiCaSchedulerApp.java:

2) Is it necessary to get the scheduler instance just for the minimum allocation? Do you think it would be better to get the minimum allocation using:

              minAllocMb = rmContext.getConf().getInt(
              	YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB,
              	YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB);
          

That would avoid creating a mocked scheduler in some of the test changes, including the changes in RMContext.java.

          TestApplicationLimits:

3) I think RMContext.getRMApps can be updated directly; there's no need to spy on it. I suggest avoiding spying on objects in tests as much as we can: it's not very understandable and easily causes problems when we update implementations.
4) Are such spy invocations unnecessary?

          FiCaSchedulerApp app_0_0 = 
                  spy(new FiCaSchedulerApp(appAttemptId_0_0, user_0, queue,
          

5) Nobody is using this; it should be removed.

            private FiCaSchedulerApp getMockApplication(int appId, String user) {
   return getMockApplication(appId, user, Resource.newInstance(0, 0));
            }
          

          Wangda

          cwelch Craig Welch added a comment -

Regarding the findbugs report for LeafQueue.lastClusterResource: access to lastClusterResource appears to be synchronized everywhere except getAbsActualCapacity, which I don't actually see being used anywhere. I'm going to add a findbugs exclusion and a comment on the method, so that if it is used in the future the synchronization can be addressed.

          -re Wangda Tan 's latest:

-re 1 - Actually, user limits are based on absolute queue capacity rather than max capacity. This is apparently intentional: although a queue can exceed its absolute capacity, an individual user is not supposed to, hence my basing the user AM limit on the absolute capacity. The approach I use fits with the original logic in CSQueueUtils, which allows a user the greater of the user-limit share of the absolute capacity or a 1/#active-users share (so if fewer users are active than would reach the user limit, they can use the full queue absolute capacity); the only correction is that we use the actual value of resources used by application masters instead of one based on the minimum allocation.

-re 2 - Actually, the snippet provided is not quite correct; some schedulers provide a CPU value as well. In any case, for encapsulation reasons it's better to use the scheduler's value, in case its means of determining this changes in the future.

-re 3 - I can't see this making the slightest difference in understandability. Since these tests' paths don't populate the rmapps, I would simply be putting individually mocked ones into the map instead of the single mock + matcher for all the apps. The way it is seems clearer to me, as all of the mocking is together instead of the process of putting mock rmapps into the collection being distributed throughout the test.

-re 4 - Interesting; those were already there, but I also couldn't see why. The test passes fine without them, so I removed them.

          -re 5 - removed

Uploading an updated patch in a few.
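As a hedged sketch of the user AM limit semantics described in point 1 above (helper names such as queueAbsCapacityResource, activeUsers, and maxAMResourcePercent are illustrative, not the committed LeafQueue fields):

          // A user may use the greater of their user-limit share of the queue's
          // absolute capacity or an even 1/#active-users split, scaled by
          // max-am-resource-percent; the Resources helpers are from
          // org.apache.hadoop.yarn.util.resource.Resources.
          Resource userLimitShare =
              Resources.multiply(queueAbsCapacityResource, userLimit / 100.0f);
          Resource evenShare = Resources.divideAndCeil(
              resourceCalculator, queueAbsCapacityResource, activeUsers);
          Resource userAMLimit = Resources.multiply(
              Resources.max(resourceCalculator, lastClusterResource,
                  userLimitShare, evenShare),
              maxAMResourcePercent);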

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12692009/YARN-2637.40.patch
          against trunk revision 10ac5ab.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6323//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6323//console

          This message is automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Hi Craig Welch,
Thanks for updating; your replies all make sense to me.
+1 for the latest patch (ver. 40).

          Wangda

          jianhe Jian He added a comment -

LGTM too, thanks Craig Welch and Wangda Tan!

          jianhe Jian He added a comment -

Committed to trunk and branch-2, thanks Craig!
Thanks Wangda and Junping for reviewing the patch!

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6856 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6856/)
          YARN-2637. Fixed max-am-resource-percent calculation in CapacityScheduler when activating applications. Contributed by Craig Welch (jianhe: rev c53420f58364b11fbda1dace7679d45534533382)

          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRMRPCNodeUpdates.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestCapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #73 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/73/)
          YARN-2637. Fixed max-am-resource-percent calculation in CapacityScheduler when activating applications. Contributed by Craig Welch (jianhe: rev c53420f58364b11fbda1dace7679d45534533382)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestCapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRMRPCNodeUpdates.java
          • hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #807 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/807/)
          YARN-2637. Fixed max-am-resource-percent calculation in CapacityScheduler when activating applications. Contributed by Craig Welch (jianhe: rev c53420f58364b11fbda1dace7679d45534533382)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRMRPCNodeUpdates.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestCapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #70 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/70/)
          YARN-2637. Fixed max-am-resource-percent calculation in CapacityScheduler when activating applications. Contributed by Craig Welch (jianhe: rev c53420f58364b11fbda1dace7679d45534533382)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestCapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRMRPCNodeUpdates.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2005 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2005/)
          YARN-2637. Fixed max-am-resource-percent calculation in CapacityScheduler when activating applications. Contributed by Craig Welch (jianhe: rev c53420f58364b11fbda1dace7679d45534533382)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRMRPCNodeUpdates.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestCapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #74 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/74/)
          YARN-2637. Fixed max-am-resource-percent calculation in CapacityScheduler when activating applications. Contributed by Craig Welch (jianhe: rev c53420f58364b11fbda1dace7679d45534533382)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestCapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
          • hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRMRPCNodeUpdates.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2024 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2024/)
          YARN-2637. Fixed max-am-resource-percent calculation in CapacityScheduler when activating applications. Contributed by Craig Welch (jianhe: rev c53420f58364b11fbda1dace7679d45534533382)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestCapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRMRPCNodeUpdates.java
          • hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
          djp Junping Du added a comment -

          Hi Craig Welch and Jian He, I think MAPREDUCE-6189 could be related to this patch. Can you take a look at it? Thanks!

          vinodkv Vinod Kumar Vavilapalli added a comment -

          Pulled this into 2.6.1 as it is both an important fix and also to help backport other patches like YARN-3733.

          The patch applied cleanly for the most part, except for a conflict in LeafQueue.java. Ran compilation and the tests TestAMRMRPCNodeUpdates, TestCapacitySchedulerPlanFollower, TestApplicationLimits, TestLeafQueue, TestReservations, TestFifoScheduler, and TestRMWebServicesCapacitySched before the push.
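
          (For anyone reproducing that validation, something like the following should work from the resourcemanager module -- illustrative only, assuming a standard Maven checkout; Surefire's -Dtest flag accepts a comma-separated list of test classes:)

              cd hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
              mvn test -Dtest=TestAMRMRPCNodeUpdates,TestCapacitySchedulerPlanFollower,TestApplicationLimits,TestLeafQueue,TestReservations,TestFifoScheduler,TestRMWebServicesCapacitySched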


            People

            • Assignee:
              cwelch Craig Welch
              Reporter:
              leftnoteasy Wangda Tan
            • Votes:
              0
              Watchers:
              12
