Hadoop YARN / YARN-5540

scheduler spends too much time looking at empty priorities

    Details

    • Hadoop Flags:
      Reviewed

      Description

      We're starting to see the capacity scheduler run out of scheduling horsepower when running 500-1000 applications on clusters with 4K nodes or so.

      This seems to be amplified by Tez applications. Tez applications have many more priorities (sometimes in the hundreds) than typical MR applications, so the loop in the scheduler that examines every priority within every running application starts to become a hotspot. The priorities appear to stay around forever, even when there is no remaining resource request at that priority, causing us to spend a lot of time looking at nothing.
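      The cost described above can be sketched with a hypothetical micro-illustration (the class, numbers, and data structures are invented for this sketch and are not the actual RM code): every scheduling pass walks every priority of every app, including priorities with zero remaining asks.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Hypothetical illustration of the hotspot: the scheduler's inner loop
// examines every priority of every running app, even stale, empty ones.
public class EmptyPriorityScan {
    // Returns {entries examined, entries with live asks} for one pass.
    static long[] scan() {
        List<TreeMap<Integer, Integer>> apps = new ArrayList<>();
        for (int a = 0; a < 1000; a++) {            // ~1000 running apps
            TreeMap<Integer, Integer> asksByPriority = new TreeMap<>();
            for (int p = 0; p < 300; p++) {         // Tez: hundreds of priorities
                asksByPriority.put(p, 0);           // stale, fully satisfied asks
            }
            asksByPriority.put(0, 5);               // only one priority still live
            apps.add(asksByPriority);
        }
        long examined = 0, useful = 0;
        for (TreeMap<Integer, Integer> app : apps) {
            for (int containers : app.values()) {   // the scheduler's inner loop
                examined++;
                if (containers > 0) useful++;
            }
        }
        return new long[] {examined, useful};
    }

    public static void main(String[] args) {
        long[] r = scan();
        System.out.println(r[0] + " entries examined, " + r[1] + " useful");
    }
}
```

      With these made-up numbers a single pass examines 300,000 priority entries of which only 1,000 carry a live ask, which is the "looking at nothing" the description complains about.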

      jstack snippet:

      "ResourceManager Event Processor" #28 prio=5 os_prio=0 tid=0x00007fc2d453e800 nid=0x22f3 runnable [0x00007fc2a8be2000]
         java.lang.Thread.State: RUNNABLE
              at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getResourceRequest(SchedulerApplicationAttempt.java:210)
              - eliminated <0x00000005e73e5dc0> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp)
              at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:852)
              - locked <0x00000005e73e5dc0> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp)
              - locked <0x00000003006fcf60> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue)
              at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:527)
              - locked <0x00000003001b22f8> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue)
              at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:415)
              - locked <0x00000003001b22f8> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue)
              at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1224)
              - locked <0x0000000300041e40> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler)
      
      1. YARN-5540.001.patch
        13 kB
        Jason Lowe
      2. YARN-5540.002.patch
        13 kB
        Jason Lowe
      3. YARN-5540.003.patch
        13 kB
        Jason Lowe
      4. YARN-5540.004.patch
        13 kB
        Jason Lowe
      5. YARN-5540-branch-2.7.004.patch
        8 kB
        Jason Lowe
      6. YARN-5540-branch-2.8.004.patch
        10 kB
        Jason Lowe

        Activity

        Arun Suresh added a comment - edited

        This would affect the FairScheduler too, actually.
        Looks like AppSchedulingInfo::decResourceRequest() should be removing the empty HashMap if there are no entries remaining against that priority / schedulerRequestKey.

        Jason Lowe added a comment -

        I agree this applies to the FairScheduler as well, so updating the summary accordingly.

        Jason Lowe added a comment -

        The main problem is that a scheduler key is never removed from the collection of scheduler keys, even when there are no further asks for that key. There's also a separate issue where we can fail to clean up the underlying hash map keys underneath a particular scheduler key, but I believe that's more of a memory issue than a performance issue. The performance issue occurs because the schedulers' inner loop iterates the scheduler keys, so it's important to remove keys we know are no longer necessary.

        When I first started this patch I tried to clean up all of the bookkeeping, including all the keys from the underlying requests hashmap. This made for a much larger patch and added new, interesting NPE possibilities, since requests could disappear in cases that are impossible today. For example, the current code goes out of its way to avoid removing the ANY request for a scheduler key. As such I decided to focus just on the scheduler key set size problem, which makes for a more focused patch that should still fix the main problem behind this JIRA.

        Attaching a patch for trunk for review. The main idea is to reference count the various scheduler keys and remove them once their refcount goes to zero. We increment the refcount for a key when the corresponding ANY request goes from zero to non-zero or if there's a container increment request against that scheduler key when there wasn't one before. Similarly we decrement the refcount for a key when the corresponding ANY request goes from non-zero to zero or if there are no container increment requests when there were some before. When a scheduler key refcount goes from 0 to 1 it is inserted in the collection of scheduler keys, and when it goes from 1 to 0 it is removed from the collection. This also has the nice property that deactivation checks simply become an isEmpty check on the collection of scheduler keys rather than a loop over that collection.
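        The reference-counting scheme described above might look roughly like this sketch. The class and method names here are invented for illustration; the real bookkeeping lives in AppSchedulingInfo and uses the actual SchedulerRequestKey type rather than a bare int.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of reference-counted scheduler keys: a key is present
// in the collection only while its refcount is non-zero, so the scheduler
// never iterates empty priorities.
public class SchedulerKeyTracker {
    // key -> refcount; sorted so iteration follows priority ordering
    private final Map<Integer, AtomicInteger> schedulerKeys =
        new ConcurrentSkipListMap<>();

    // Called when the ANY request for a key goes from zero to non-zero
    // (or a container increment request appears where there was none).
    public void incrementKeyRef(int schedulerKey) {
        schedulerKeys.computeIfAbsent(schedulerKey, k -> new AtomicInteger(0))
            .incrementAndGet();          // 0 -> 1: key becomes active
    }

    // Called when the ANY request for a key goes from non-zero to zero
    // (or the last container increment request for the key disappears).
    public void decrementKeyRef(int schedulerKey) {
        AtomicInteger ref = schedulerKeys.get(schedulerKey);
        if (ref != null && ref.decrementAndGet() == 0) {
            schedulerKeys.remove(schedulerKey);  // 1 -> 0: stop scanning it
        }
    }

    // Deactivation checks become a simple isEmpty test, not a loop.
    public boolean hasPendingKeys() {
        return !schedulerKeys.isEmpty();
    }
}
```

        Note that only the key set shrinks; the underlying request map for a key is left alone, matching the decision above to avoid touching the ANY-request bookkeeping.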

        Once we're agreed on a version for trunk I'll put up the separate patches for branch-2.8 and branch-2.7 due to changes from YARN-5392 and YARN-1651, respectively.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 20s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 7m 21s trunk passed
        +1 compile 0m 34s trunk passed
        +1 checkstyle 0m 22s trunk passed
        +1 mvnsite 0m 38s trunk passed
        +1 mvneclipse 0m 17s trunk passed
        +1 findbugs 1m 1s trunk passed
        +1 javadoc 0m 21s trunk passed
        +1 mvninstall 0m 32s the patch passed
        +1 compile 0m 29s the patch passed
        +1 javac 0m 29s the patch passed
        -1 checkstyle 0m 17s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 6 unchanged - 2 fixed = 7 total (was 8)
        +1 mvnsite 0m 35s the patch passed
        +1 mvneclipse 0m 13s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 4s the patch passed
        +1 javadoc 0m 19s the patch passed
        +1 unit 37m 37s hadoop-yarn-server-resourcemanager in the patch passed.
        +1 asflicense 0m 14s The patch does not generate ASF License warnings.
        52m 53s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:9560f25
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12825118/YARN-5540.001.patch
        JIRA Issue YARN-5540
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux bd916236b660 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 8aae8d6
        Default Java 1.8.0_101
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/12864/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12864/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/12864/console
        Powered by Apache Yetus 0.3.0 http://yetus.apache.org

        This message was automatically generated.

        Arun Suresh added a comment -

        Thanks for the patch, Jason Lowe.

        Minor nits:

        1. You can remove the TODO: Shouldn't we activate even if numContainers = 0, since you are now taking care of it.
        2. You do not really need to pass the schedulerKey around, since you can extract it from the request using SchedulerRequestKey::create(ResourceRequest); but since some of the existing methods still pass it around, it's not a must-fix for me.

        Thinking out loud here... shouldn't we merge the two data structures (the resourceRequestMap ConcurrentHashMap and the schedulerKeys TreeSet) into a single ConcurrentSkipListMap and return its keySet() when getSchedulerKeys() is called?

        Jason Lowe added a comment -

        Thanks for the review!

        you can remove the TODO: Shouldn't we activate even if numContainers = 0 since you are now taking care of it.

        Unless I'm missing something it's still not handling it. Activation will only occur if the ANY request numContainers > 0 because we won't go through that TODO commented code if numContainers <= 0.

        You do not really need to pass the schedulerKey around since you can extract it from the request

        True, but that's significantly more expensive since it requires object creation and adds to the garbage collection overhead. As such I thought it was far preferable to pass the existing object than create a copy.

        shouldn't we probably merge the 2 data structures (the resourceRequestMap ConcurrentHashMap and the schedulerKeys TreeSet) with a ConcurrentSkipListMap

        No, that will break the delta protocol. There are cases when we want to remove the scheduler key from the collection but not remove the map of requests that go with that key. In other words, there are cases where there are no more containers to allocate for a scheduler key but the RM should not forget the outstanding locality-specific requests that have been sent for that key. The concurrent task limiting feature of MAPREDUCE-5583 is one example that leverages this. The MapReduce job sends the full list of locality requests up front then artificially lowers the ANY request count to the concurrent limit. As requests are fulfilled it bumps the ANY request back up to the concurrent limit without re-sending the locality-specific requests. The RM should still remember them because it's a delta protocol, so there's no need to re-send them. If we pulled out the entire request map when there are no more containers to allocate for that scheduler key then the RM would forget the locality-specific requests when the ANY request is bumped back up and break the delta protocol semantics.
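        The delta-protocol point above can be made concrete with a small hypothetical sketch. The map stands in for the RM-side request table (resourceName -> number of containers) for a single scheduler key; the method names are invented for this illustration and are not the real AMRM protocol API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of why the request map must outlive the
// scheduler key: locality-specific asks are sent once and never re-sent.
public class DeltaProtocolExample {
    static Map<String, Integer> rmView = new HashMap<>();

    // The AM sends the full locality list once, with ANY capped at the
    // concurrent limit (the MAPREDUCE-5583-style trick).
    static void initialAsk() {
        rmView.put("host1", 10);
        rmView.put("rack1", 10);
        rmView.put("*", 3);        // ANY: concurrent limit of 3
    }

    static boolean localityRemembered() {
        initialAsk();
        rmView.put("*", 0);        // limit reached: ANY temporarily zero
        // The scheduler key can be dropped from the active set here, but
        // the request map must survive: the next delta only bumps ANY back
        // up and does NOT re-send host1/rack1.
        rmView.put("*", 3);
        return rmView.get("host1") == 10;
    }

    public static void main(String[] args) {
        System.out.println("locality remembered: " + localityRemembered());
    }
}
```

        Dropping the whole map when ANY hits zero would forget host1/rack1 by the time the AM bumps ANY back up, which is exactly the delta-semantics breakage described above.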

        Arun Suresh added a comment -

        Unless I'm missing something it's still not handling it. Activation will only occur if the ANY request numContainers > 0 because we won't go

        Aah.. true, I mistook the lastRequestContainers for the numContainers. I guess the TODO should be moved before the
        if (request.getNumContainers() <= 0)

        The concurrent task limiting feature of MAPREDUCE-5583 is one example that leverages this.

        Thanks for the explanation. While this seems like a really cool way of solving the limiting problem, in my opinion it is leveraging an undocumented API (the fact that queue demand is updated only via the ANY request). For instance, it is not even possible to do this using the AMRMClient. One way to do this might be to leverage the YARN Reservation System, which allows you to specify task parallelism by adjusting the queues dynamically - but we can discuss this outside of this JIRA.

        There are cases when we want to remove the scheduler key from the collection but not remove the map of requests that go with that key

        Looks like YARN-1651 does the opposite as well...

        Jason Lowe added a comment -

        Cancelling the patch because there's going to be a problem with ConcurrentModificationException. Since the scheduler keys are not in a concurrent collection, anything that iterates it while entries are removed outside that iterator is going to be a problem. So when the scheduler loop iterates the keys, a container allocation could remove a scheduler key and cause the CME. Either the scheduler loop needs to be the one that removes the key, we need to iterate a copy (undesirable), or the key collection needs to support concurrent modification.

        I guess the TODO should be moved

        Good point. I'll fix that in the next version of the patch.

        It is in my opinion leveraging what is an un-documented API (the fact that queue demand is updated only with the ANY request).

        The only documentation of the YARN allocation protocol for a while was the MapReduce AM code, and that code leveraged this fact well before MAPREDUCE-5583. Asking for a single container on either rack1/host1, rack2/host2, or rack3/host3 doesn't allocate three containers, it allocates one only because the ANY request is 1. Also looking at the core of the RM schedulers, it's always been about the ANY request for whether or not applications get resources. Lots of people based their early custom apps on MapReduce AM code or by looking at the YARN scheduler code, so I don't think we can change that behavior without risking breaking those apps. It's also interesting that one of the designers of the YARN allocation protocol suggested the ANY "hack" as the way forward on MAPREDUCE-5583. (See this comment.)

        One way to do this might be to leverage the YARN Reservation System

        Interesting idea, but that would limit the resources for the entire app, not just the requested phases (i.e., users often want to limit maps but not reduces, or vice versa).

        Looks like the YARN-1651 does the opposite as well...

        YARN-1651 is about updating existing allocations for specific existing containers rather than new allocations. It doesn't have the concept of rack/ANY like the allocate protocol. Or am I missing something here?

        Jason Lowe added a comment -

        Updating the patch to use a ConcurrentSkipListMap instead of a TreeMap. This is not going to be as cheap as having the iterator do the removal, but it's far less code change and more robust if we allow other threads to update the requests as the scheduler is examining them.

        I also updated the TODO comment and added a check for the CME possibility which catches the error from the previous patch.
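        The fail-fast versus weakly consistent distinction driving this change can be demonstrated in isolation. This is a standalone sketch, not RM code: TreeSet iterators throw CME on outside removal, while ConcurrentSkipListMap iterators tolerate it.

```java
import java.util.ConcurrentModificationException;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentSkipListMap;

public class CmeDemo {
    // TreeSet iterators are fail-fast: a removal from another code path
    // while the scheduler loop is iterating throws CME on the next step.
    static boolean treeSetThrowsCme() {
        TreeSet<Integer> keys = new TreeSet<>();
        keys.add(1); keys.add(2); keys.add(3);
        try {
            for (Integer k : keys) {
                keys.remove(2);   // simulates allocation removing a key
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }

    // ConcurrentSkipListMap iterators are weakly consistent: removal
    // during iteration is legal and never throws CME.
    static boolean skipListTolerates() {
        ConcurrentSkipListMap<Integer, Boolean> keys =
            new ConcurrentSkipListMap<>();
        keys.put(1, true); keys.put(2, true); keys.put(3, true);
        for (Integer k : keys.keySet()) {
            keys.remove(2);
        }
        return !keys.containsKey(2);
    }

    public static void main(String[] args) {
        System.out.println("TreeSet CME: " + treeSetThrowsCme());
        System.out.println("SkipListMap safe: " + skipListTolerates());
    }
}
```

        The weakly consistent iterator is what lets container allocation remove a scheduler key while the scheduler loop is mid-iteration, at the cost of skip-list overhead versus a plain tree.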

        Hadoop QA added a comment -
        +1 overall



        Vote Subsystem Runtime Comment
        0 reexec 13m 11s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 7m 21s trunk passed
        +1 compile 0m 33s trunk passed
        +1 checkstyle 0m 20s trunk passed
        +1 mvnsite 0m 38s trunk passed
        +1 mvneclipse 0m 17s trunk passed
        +1 findbugs 0m 59s trunk passed
        +1 javadoc 0m 20s trunk passed
        +1 mvninstall 0m 31s the patch passed
        +1 compile 0m 28s the patch passed
        +1 javac 0m 28s the patch passed
        +1 checkstyle 0m 17s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 6 unchanged - 2 fixed = 6 total (was 8)
        +1 mvnsite 0m 34s the patch passed
        +1 mvneclipse 0m 13s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 2s the patch passed
        +1 javadoc 0m 18s the patch passed
        +1 unit 33m 37s hadoop-yarn-server-resourcemanager in the patch passed.
        +1 asflicense 0m 18s The patch does not generate ASF License warnings.
        61m 34s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:9560f25
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12825677/YARN-5540.002.patch
        JIRA Issue YARN-5540
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux d97abe591d5c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 9ef632f
        Default Java 1.8.0_101
        findbugs v3.0.0
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12904/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/12904/console
        Powered by Apache Yetus 0.3.0 http://yetus.apache.org

        This message was automatically generated.

        Wangda Tan added a comment -

        Thanks Jason Lowe, patch generally looks good, only a few minor comments:
        1) It might be better to rename add/removeSchedulerKeyReference to increase/decreaseSchedulerKeyReference since schedulerKey ref will not be removed unless value == 0.
        2) Do you think we should remove:

        // TODO: Shouldn't we activate even if numContainers = 0?

        Jason Lowe added a comment -

        Thanks for the review, Wangda!

        Updated the method names per the suggestion. I removed the TODO comment, since Arun also asked about it above. This code didn't change the behavior surrounding the question raised by the TODO. However I think it's safe to assume at this point that we are not going to consider activating apps when the total container ask is zero.

        jlowe Jason Lowe added a comment -

        Oops, just realized patch 003 is missing the TODO comment removal. Fixed in 004.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 19s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 6m 41s trunk passed
        +1 compile 0m 33s trunk passed
        +1 checkstyle 0m 20s trunk passed
        +1 mvnsite 0m 39s trunk passed
        +1 mvneclipse 0m 17s trunk passed
        +1 findbugs 0m 56s trunk passed
        +1 javadoc 0m 23s trunk passed
        +1 mvninstall 0m 37s the patch passed
        +1 compile 0m 36s the patch passed
        +1 javac 0m 36s the patch passed
        +1 checkstyle 0m 19s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 6 unchanged - 2 fixed = 6 total (was 8)
        +1 mvnsite 0m 44s the patch passed
        +1 mvneclipse 0m 16s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 16s the patch passed
        +1 javadoc 0m 22s the patch passed
        -1 unit 38m 16s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 17s The patch does not generate ASF License warnings.
        53m 34s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestRMAdminService



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:9560f25
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12828282/YARN-5540.004.patch
        JIRA Issue YARN-5540
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 8e6c89b22b23 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / e793309
        Default Java 1.8.0_101
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-YARN-Build/13096/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/13096/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/13096/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/13096/console
        Powered by Apache Yetus 0.3.0 http://yetus.apache.org

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 22s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 7m 49s trunk passed
        +1 compile 0m 32s trunk passed
        +1 checkstyle 0m 20s trunk passed
        +1 mvnsite 0m 38s trunk passed
        +1 mvneclipse 0m 17s trunk passed
        +1 findbugs 0m 58s trunk passed
        +1 javadoc 0m 20s trunk passed
        +1 mvninstall 0m 32s the patch passed
        +1 compile 0m 29s the patch passed
        +1 javac 0m 29s the patch passed
        +1 checkstyle 0m 18s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 6 unchanged - 2 fixed = 6 total (was 8)
        +1 mvnsite 0m 38s the patch passed
        +1 mvneclipse 0m 14s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 12s the patch passed
        +1 javadoc 0m 18s the patch passed
        -1 unit 38m 13s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 17s The patch does not generate ASF License warnings.
        54m 5s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:9560f25
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12828282/YARN-5540.004.patch
        JIRA Issue YARN-5540
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux ae4258fb4f20 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / e793309
        Default Java 1.8.0_101
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-YARN-Build/13097/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/13097/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/13097/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/13097/console
        Powered by Apache Yetus 0.3.0 http://yetus.apache.org

        This message was automatically generated.

        jlowe Jason Lowe added a comment -

        The two test failures appear to be unrelated. Filed YARN-5652 for the TestRMAdminService failure and YARN-5653 for the TestNodeLabelContainerAllocation failure.

        leftnoteasy Wangda Tan added a comment -

        +1 to latest patch, thanks Jason Lowe.

        jlowe Jason Lowe added a comment -

        Thanks for the reviews!

        Attaching the patches for 2.8 and 2.7. The 2.8 patch was quite a bit different since that branch doesn't have the priority -> scheduler key change. The 2.7 patch was even simpler than the 2.8 one since that branch doesn't have the container increase/decrease functionality, so we don't need to do refcounting and can get away with a ConcurrentSkipListSet.
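To illustrate why branch-2.7 can skip the refcounting: without container resize there is at most one live set of asks per priority, so a sorted concurrent set of active priorities is enough. A sketch under that assumption (names are hypothetical, not the actual patch):

```java
import java.util.concurrent.ConcurrentSkipListSet;

/**
 * Illustrative tracker for the simpler branch-2.7 approach: each priority
 * is simply present or absent, so no per-key reference count is needed.
 */
class ActivePriorityTracker {
  private final ConcurrentSkipListSet<Integer> activePriorities =
      new ConcurrentSkipListSet<>();

  // Called when a priority gains outstanding container asks.
  void markActive(int priority) {
    activePriorities.add(priority);
  }

  // Called when the last outstanding ask at a priority is satisfied.
  void markInactive(int priority) {
    activePriorities.remove(priority);
  }

  // The scheduler loop walks only priorities that still have asks,
  // instead of every priority the application has ever used.
  Iterable<Integer> prioritiesToSchedule() {
    return activePriorities;
  }
}
```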

        If someone can verify the patches for branch-2.8 and branch-2.7 look good as well then I'd be happy to commit this.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 23s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 7m 38s branch-2.7 passed
        +1 compile 0m 25s branch-2.7 passed with JDK v1.8.0_101
        +1 compile 0m 27s branch-2.7 passed with JDK v1.7.0_111
        +1 checkstyle 0m 21s branch-2.7 passed
        +1 mvnsite 0m 36s branch-2.7 passed
        +1 mvneclipse 0m 17s branch-2.7 passed
        -1 findbugs 1m 3s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in branch-2.7 has 1 extant Findbugs warnings.
        +1 javadoc 0m 19s branch-2.7 passed with JDK v1.8.0_101
        +1 javadoc 0m 24s branch-2.7 passed with JDK v1.7.0_111
        +1 mvninstall 0m 27s the patch passed
        +1 compile 0m 23s the patch passed with JDK v1.8.0_101
        +1 javac 0m 23s the patch passed
        +1 compile 0m 25s the patch passed with JDK v1.7.0_111
        +1 javac 0m 25s the patch passed
        -1 checkstyle 0m 16s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 176 unchanged - 5 fixed = 178 total (was 181)
        +1 mvnsite 0m 30s the patch passed
        +1 mvneclipse 0m 13s the patch passed
        -1 whitespace 0m 0s The patch has 2295 line(s) that end in whitespace. Use git apply --whitespace=fix.
        -1 whitespace 0m 59s The patch has 74 line(s) with tabs.
        +1 findbugs 1m 11s the patch passed
        +1 javadoc 0m 15s the patch passed with JDK v1.8.0_101
        +1 javadoc 0m 22s the patch passed with JDK v1.7.0_111
        -1 unit 48m 52s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_101.
        -1 unit 50m 6s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_111.
        +1 asflicense 0m 17s The patch does not generate ASF License warnings.
        117m 28s



        Reason Tests
        JDK v1.8.0_101 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
        JDK v1.7.0_111 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.TestClientRMTokens



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:c420dfe
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12828862/YARN-5540-branch-2.7.004.patch
        JIRA Issue YARN-5540
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 0dbf15753e25 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.7 / a8b7817
        Default Java 1.7.0_111
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_101 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111
        findbugs v3.0.0
        findbugs https://builds.apache.org/job/PreCommit-YARN-Build/13124/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/13124/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        whitespace https://builds.apache.org/job/PreCommit-YARN-Build/13124/artifact/patchprocess/whitespace-eol.txt
        whitespace https://builds.apache.org/job/PreCommit-YARN-Build/13124/artifact/patchprocess/whitespace-tabs.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/13124/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_101.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/13124/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_111.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/13124/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_101.txt https://builds.apache.org/job/PreCommit-YARN-Build/13124/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_111.txt
        JDK v1.7.0_111 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/13124/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/13124/console
        Powered by Apache Yetus 0.3.0 http://yetus.apache.org

        This message was automatically generated.

        asuresh Arun Suresh added a comment -

        +1 pending branch-2.8 jenkins.
        Thanks Jason Lowe.

        jlowe Jason Lowe added a comment -

        Thanks for the review, Arun! Posting the branch-2.8 patch again to trigger the Jenkins run.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 13m 11s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 9m 50s branch-2.8 passed
        +1 compile 0m 38s branch-2.8 passed with JDK v1.8.0_101
        +1 compile 0m 30s branch-2.8 passed with JDK v1.7.0_111
        +1 checkstyle 0m 19s branch-2.8 passed
        +1 mvnsite 0m 42s branch-2.8 passed
        +1 mvneclipse 0m 20s branch-2.8 passed
        +1 findbugs 1m 17s branch-2.8 passed
        +1 javadoc 0m 27s branch-2.8 passed with JDK v1.8.0_101
        +1 javadoc 0m 24s branch-2.8 passed with JDK v1.7.0_111
        +1 mvninstall 0m 34s the patch passed
        +1 compile 0m 35s the patch passed with JDK v1.8.0_101
        +1 javac 0m 35s the patch passed
        +1 compile 0m 31s the patch passed with JDK v1.7.0_111
        +1 javac 0m 31s the patch passed
        +1 checkstyle 0m 15s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 16 unchanged - 3 fixed = 16 total (was 19)
        +1 mvnsite 0m 36s the patch passed
        +1 mvneclipse 0m 14s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 26s the patch passed
        +1 javadoc 0m 23s the patch passed with JDK v1.8.0_101
        +1 javadoc 0m 20s the patch passed with JDK v1.7.0_111
        -1 unit 70m 10s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_101.
        -1 unit 71m 2s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_111.
        +1 asflicense 0m 24s The patch does not generate ASF License warnings.
        175m 18s



        Reason Tests
        JDK v1.8.0_101 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
        JDK v1.7.0_111 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:5af2af1
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12829191/YARN-5540-branch-2.8.004.patch
        JIRA Issue YARN-5540
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 0318c5a46cc2 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.8 / a1cc90b
        Default Java 1.7.0_111
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_101 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-YARN-Build/13148/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_101.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/13148/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_111.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/13148/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_101.txt https://builds.apache.org/job/PreCommit-YARN-Build/13148/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_111.txt
        JDK v1.7.0_111 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/13148/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/13148/console
        Powered by Apache Yetus 0.3.0 http://yetus.apache.org

        This message was automatically generated.

        Show
        hadoopqa Hadoop QA added a comment - -1 overall Vote Subsystem Runtime Comment 0 reexec 13m 11s Docker mode activated. +1 @author 0m 0s The patch does not contain any @author tags. +1 test4tests 0m 0s The patch appears to include 1 new or modified test files. +1 mvninstall 9m 50s branch-2.8 passed +1 compile 0m 38s branch-2.8 passed with JDK v1.8.0_101 +1 compile 0m 30s branch-2.8 passed with JDK v1.7.0_111 +1 checkstyle 0m 19s branch-2.8 passed +1 mvnsite 0m 42s branch-2.8 passed +1 mvneclipse 0m 20s branch-2.8 passed +1 findbugs 1m 17s branch-2.8 passed +1 javadoc 0m 27s branch-2.8 passed with JDK v1.8.0_101 +1 javadoc 0m 24s branch-2.8 passed with JDK v1.7.0_111 +1 mvninstall 0m 34s the patch passed +1 compile 0m 35s the patch passed with JDK v1.8.0_101 +1 javac 0m 35s the patch passed +1 compile 0m 31s the patch passed with JDK v1.7.0_111 +1 javac 0m 31s the patch passed +1 checkstyle 0m 15s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 16 unchanged - 3 fixed = 16 total (was 19) +1 mvnsite 0m 36s the patch passed +1 mvneclipse 0m 14s the patch passed +1 whitespace 0m 0s The patch has no whitespace issues. +1 findbugs 1m 26s the patch passed +1 javadoc 0m 23s the patch passed with JDK v1.8.0_101 +1 javadoc 0m 20s the patch passed with JDK v1.7.0_111 -1 unit 70m 10s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_101. -1 unit 71m 2s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_111. +1 asflicense 0m 24s The patch does not generate ASF License warnings. 
175m 18s Reason Tests JDK v1.8.0_101 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens   hadoop.yarn.server.resourcemanager.TestAMAuthorization JDK v1.7.0_111 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens   hadoop.yarn.server.resourcemanager.TestAMAuthorization Subsystem Report/Notes Docker Image:yetus/hadoop:5af2af1 JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12829191/YARN-5540-branch-2.8.004.patch JIRA Issue YARN-5540 Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle uname Linux 0318c5a46cc2 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux Build tool maven Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh git revision branch-2.8 / a1cc90b Default Java 1.7.0_111 Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_101 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 findbugs v3.0.0 unit https://builds.apache.org/job/PreCommit-YARN-Build/13148/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_101.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/13148/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_111.txt unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/13148/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_101.txt https://builds.apache.org/job/PreCommit-YARN-Build/13148/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_111.txt JDK v1.7.0_111 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/13148/testReport/ modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager Console output https://builds.apache.org/job/PreCommit-YARN-Build/13148/console Powered by Apache Yetus 0.3.0 http://yetus.apache.org This message was automatically generated.
        leftnoteasy Wangda Tan added a comment -

        +1 to branch-2.8 patch as well. Thanks Jason Lowe.
        Hide
        jlowe Jason Lowe added a comment -

        Thanks for the reviews, Arun Suresh and Wangda Tan! I committed this to trunk, branch-2, branch-2.8, and branch-2.7.

        Show
        jlowe Jason Lowe added a comment - Thanks for the reviews, Arun Suresh and Wangda Tan ! I committed this to trunk, branch-2, branch-2.8, and branch-2.7.
        hudson Hudson added a comment -

        SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10461 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10461/)
        YARN-5540. Scheduler spends too much time looking at empty priorities. (jlowe: rev 7558dbbb481eab055e794beb3603bbe5671a4b4c)

        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAppSchedulingInfo.java
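The committed change removes a priority's bookkeeping in AppSchedulingInfo once no resource requests remain at that priority, so the scheduler's per-application loop never revisits empty priorities. A minimal sketch of that idea (hypothetical class and method names, not the actual YARN code):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: track outstanding container counts per priority
// and drop a priority's entry the moment its count reaches zero, so any
// iteration over active priorities skips empty ones entirely.
public class PrioritySketch {
    // priority -> number of outstanding container requests at that priority
    private final Map<Integer, Integer> outstanding = new HashMap<>();

    void addRequest(int priority, int containers) {
        outstanding.merge(priority, containers, Integer::sum);
    }

    void allocate(int priority, int containers) {
        Integer remaining = outstanding.get(priority);
        if (remaining == null) {
            return; // nothing outstanding at this priority
        }
        int left = remaining - containers;
        if (left <= 0) {
            // Key step: prune the empty priority instead of leaving a
            // zero-count entry for the scheduler to examine forever.
            outstanding.remove(priority);
        } else {
            outstanding.put(priority, left);
        }
    }

    int activePriorities() {
        return outstanding.size();
    }

    public static void main(String[] args) {
        PrioritySketch s = new PrioritySketch();
        s.addRequest(1, 2);
        s.addRequest(50, 1);
        s.allocate(50, 1); // priority 50 fully satisfied and pruned
        if (s.activePriorities() != 1) {
            throw new AssertionError("expected exactly 1 active priority");
        }
    }
}
```

With hundreds of priorities per TEZ application, pruning satisfied priorities keeps the scheduler's inner loop proportional to priorities that still have work, rather than every priority the application ever used.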

          People

          • Assignee: jlowe Jason Lowe
          • Reporter: nroberts Nathan Roberts