Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: resourcemanager
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      • When a queue has * as its accessible node labels, queue ordering does not happen properly.

      A few small nits:

      • In AppSchedulingInfo, the comparator field doesn't use generics (a small illustrative sketch follows below).
      • TestNodeLabelContainerAllocation.testResourceRequestUpdateNodePartitions has an unused variable.
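
      As a purely illustrative aside (not the actual AppSchedulingInfo code; the real field and element type may differ), the generics nit amounts to parameterizing a raw Comparator field, roughly like this:

      import java.util.Comparator;
      import java.util.TreeSet;

      // Hypothetical stand-in for the kind of raw Comparator field the nit refers to;
      // the real field and element type in AppSchedulingInfo may differ.
      class RawComparatorNitSketch {
        // Before (raw type, triggers an unchecked/raw-type warning):
        //   static final Comparator COMPARATOR = Comparator.reverseOrder();

        // After (parameterized with the type it actually orders):
        static final Comparator<Integer> COMPARATOR = Comparator.reverseOrder();

        public static void main(String[] args) {
          TreeSet<Integer> priorities = new TreeSet<>(COMPARATOR);
          priorities.add(10);
          priorities.add(20);
          System.out.println(priorities); // prints [20, 10]
        }
      }
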
      Attachments

      1. YARN-4557.v1.001.patch
        6 kB
        Naganarasimha G R
      2. YARN-4557.v2.001.patch
        10 kB
        Naganarasimha G R
      3. YARN-4557.v2.002.patch
        16 kB
        Naganarasimha G R
      4. YARN-4557.v3.001.patch
        12 kB
        Naganarasimha G R
      5. YARN-4557.v3.002.patch
        12 kB
        Naganarasimha G R

        Activity

        Naganarasimha G R added a comment -

        Along with it, there were two minor issues:

        • In AppSchedulingInfo, the comparator field doesn't use generics
        • TestNodeLabelContainerAllocation.testResourceRequestUpdateNodePartitions has an unused variable

        Will fix the above two as well...

        Naganarasimha G R added a comment -

        Fixing the above issues!

        Naganarasimha G R added a comment -

        Attaching a patch with the fix for issue 2 as well.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 7m 37s trunk passed
        +1 compile 0m 27s trunk passed with JDK v1.8.0_66
        +1 compile 0m 31s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 13s trunk passed
        +1 mvnsite 0m 37s trunk passed
        +1 mvneclipse 0m 15s trunk passed
        +1 findbugs 1m 11s trunk passed
        +1 javadoc 0m 21s trunk passed with JDK v1.8.0_66
        +1 javadoc 0m 26s trunk passed with JDK v1.7.0_91
        +1 mvninstall 0m 31s the patch passed
        +1 compile 0m 23s the patch passed with JDK v1.8.0_66
        +1 javac 0m 23s the patch passed
        +1 compile 0m 28s the patch passed with JDK v1.7.0_91
        +1 javac 0m 28s the patch passed
        +1 checkstyle 0m 13s the patch passed
        +1 mvnsite 0m 34s the patch passed
        +1 mvneclipse 0m 12s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 1m 18s the patch passed
        +1 javadoc 0m 19s the patch passed with JDK v1.8.0_66
        +1 javadoc 0m 24s the patch passed with JDK v1.7.0_91
        -1 unit 59m 29s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        -1 unit 60m 25s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
        +1 asflicense 0m 17s Patch does not generate ASF License warnings.
        137m 20s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
        JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12780987/YARN-4557.v2.001.patch
        JIRA Issue YARN-4557
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux c11720df4d00 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 172d078
        Default Java 1.7.0_91
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10191/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10191/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10191/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10191/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10191/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Max memory used 75MB
        Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/10191/console

        This message was automatically generated.

        Naganarasimha G R added a comment -

        TestNodeLabelContainerAllocation was failing because the test case was not considering the inheritance of accessible node labels from the parent queue. I have also added a new test case class for PartitionedQueueComparator.
        Tan, Wangda, can you take a look at this JIRA and the latest patch?

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
        +1 mvninstall 7m 45s trunk passed
        +1 compile 0m 30s trunk passed with JDK v1.8.0_66
        +1 compile 0m 31s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 17s trunk passed
        +1 mvnsite 0m 41s trunk passed
        +1 mvneclipse 0m 17s trunk passed
        +1 findbugs 1m 24s trunk passed
        -1 javadoc 0m 27s hadoop-yarn-server-resourcemanager in trunk failed with JDK v1.8.0_66.
        +1 javadoc 0m 32s trunk passed with JDK v1.7.0_91
        +1 mvninstall 0m 37s the patch passed
        +1 compile 0m 28s the patch passed with JDK v1.8.0_66
        +1 javac 0m 28s the patch passed
        +1 compile 0m 30s the patch passed with JDK v1.7.0_91
        +1 javac 0m 30s the patch passed
        +1 checkstyle 0m 15s the patch passed
        +1 mvnsite 0m 35s the patch passed
        +1 mvneclipse 0m 14s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 1m 25s the patch passed
        -1 javadoc 0m 21s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        +1 javadoc 0m 26s the patch passed with JDK v1.7.0_91
        -1 unit 61m 23s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        -1 unit 61m 3s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
        +1 asflicense 0m 19s Patch does not generate ASF License warnings.
        141m 12s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
        JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.TestClientRMTokens



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12781249/YARN-4557.v2.002.patch
        JIRA Issue YARN-4557
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 9de4f9bac6be 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 38c4c14
        Default Java 1.7.0_91
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
        findbugs v3.0.0
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/10206/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/10206/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10206/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10206/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10206/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10206/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10206/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Max memory used 76MB
        Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/10206/console

        This message was automatically generated.

        Naganarasimha G R added a comment -

        Hi Tan, Wangda,
        What do you think about the issues mentioned and the patch?

        Wangda Tan added a comment -

        Thanks Naganarasimha G R,

        The changes to PartitionedQueueComparator look good; the changes to RegularContainerAllocator may not be correct.
        The existing logic of non-exclusive partition allocation is:

        • For a node heartbeat, try to run exclusive allocation first.
        • If no container gets allocated, try to run non-exclusive allocation for all apps/priorities.

        If we allow a lower-priority container to be allocated first, a priority inversion problem could happen (a rough sketch of this flow follows below).
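
        For readers following along, here is a rough, simplified sketch of the two-pass flow described above. It is not the actual RegularContainerAllocator code; the class, method and enum names are illustrative only.

        // Hypothetical sketch of the per-node-heartbeat flow described above; names are
        // illustrative and this is not the actual RegularContainerAllocator code.
        enum SchedulingModeSketch { RESPECT_PARTITION_EXCLUSIVITY, IGNORE_PARTITION_EXCLUSIVITY }

        class NodeHeartbeatSketch {
          // Walks apps/priorities in order and attempts an allocation under 'mode'.
          boolean tryAllocate(SchedulingModeSketch mode) {
            return false; // placeholder for the real allocation attempt
          }

          void onNodeHeartbeat() {
            // Pass 1: exclusive allocation, respecting the node's partition.
            boolean allocated = tryAllocate(SchedulingModeSketch.RESPECT_PARTITION_EXCLUSIVITY);

            // Pass 2: only if nothing was allocated, retry for all apps/priorities,
            // letting default-partition requests use this partitioned node.
            if (!allocated) {
              tryAllocate(SchedulingModeSketch.IGNORE_PARTITION_EXCLUSIVITY);
            }
            // If a lower priority could be served before a higher one inside pass 2,
            // the priority inversion mentioned above becomes possible.
          }
        }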

        Thoughts?

        Naganarasimha G R added a comment -

        Thanks for the comments Tan, Wangda,

        but a few concerns here:

        if (anyRequest.getNodeLabelExpression()
            .equals(RMNodeLabelsManager.NO_LABEL)) {
          missedNonPartitionedRequestSchedulingOpportunity =
              application
                  .addMissedNonPartitionedRequestSchedulingOpportunity(priority);
        }
        
        1. In the above code, why are we storing the missed non-partitioned request scheduling opportunity per priority? Should it not be per app?
        2. As per the existing logic, priority inversion happens in the following scenarios too:
        • Locality doesn't get matched for the given priority.
        • Node labels are different for different priorities, and resource availability for those labels may also differ.

        Thoughts?

        Wangda Tan added a comment -

        In the above code, why are we storing the missed non-partitioned request scheduling opportunity per priority? Should it not be per app?

        This is the same as how we store missed opportunities for delay scheduling. Different priorities could have different requests.

        Locality doesn't get matched for the given priority

        This cannot happen; see the following code:

              // When a returned allocation is LOCALITY_SKIPPED, since we're in
              // off-switch request now, we will skip this app w.r.t priorities 
              if (allocation.state == AllocationState.LOCALITY_SKIPPED) {
                allocation.state = AllocationState.APP_SKIPPED;
              }
        

        Node labels are different for different priorities, and resource availability for those labels may also differ.

        That case is "cannot use", whereas the non-exclusive delay case is "cannot be satisfied currently".

        Naganarasimha G R added a comment -

        Hi Tan, Wangda,
        Thanks for patiently answering my queries; I still have a few doubts:

        This is the same as how we store missed opportunities for delay scheduling. Different priorities could have different requests.

        I am a little confused here: if different priorities have different requests, why treat them differently when assigning in ignore-partition-exclusivity mode? Consider an example:
        In a cluster of size 10,
        assume the app initially requests: priority 20, #containers 1, mem 8 GB, label = default, mNPRSO = 6
        (mNPRSO => missedNonPartitionedRequestSchedulingOpportunity)
        and now additionally requests: priority 10, #containers 1, mem 8 GB, label = default, mNPRSO = 0.
        Now, maybe after 10 non-exclusive node heartbeats, if a container gets assigned for priority 10, then mNPRSO for the priority-20 request resumes from where it left off, i.e. 6. Should it not start from 0?

        Consider the reverse case where the app initially requests priority 10, #containers 1, mem 8 GB, label = default, mNPRSO = 5,
        and additionally requests priority 20, #containers 1, mem 8 GB, label = default, mNPRSO = 0; then only after priority 10 is assigned (after 5 more non-exclusive node heartbeats) does the mNPRSO for priority 20 start counting.
        So I felt this is not correct; it would be better to consider missedNonPartitionedRequestSchedulingOpportunity for the app as a whole, or consider it individually for each priority and return AllocationState.APP_SKIPPED! (A toy sketch of the two bookkeeping options follows below.)
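
        To make the two bookkeeping options concrete, here is a toy sketch. It is purely illustrative; the real counter and its reset semantics differ in detail.

        import java.util.HashMap;
        import java.util.Map;

        // Toy illustration of the two bookkeeping choices discussed above; purely
        // hypothetical, the real counter and its reset semantics differ in detail.
        class MissedOpportunitySketch {
          // Option A (behaviour as described): one counter per priority, so a counter
          // survives allocations that happen at other priorities.
          static final Map<Integer, Integer> perPriorityMissed = new HashMap<>();
          // Option B (suggestion above): a single counter for the whole app.
          static int perAppMissed = 0;

          static void missed(int priority) {
            perPriorityMissed.merge(priority, 1, Integer::sum);
            perAppMissed++;
          }

          static void allocated(int priority) {
            perPriorityMissed.remove(priority); // only this priority's count resets
            perAppMissed = 0;                   // whole-app counter resets
          }

          public static void main(String[] args) {
            for (int i = 0; i < 6; i++) missed(20); // priority 20 misses 6 heartbeats
            missed(10);                             // new priority-10 request misses once
            allocated(10);                          // priority 10 gets a container
            // Per-priority bookkeeping leaves priority 20 at 6; per-app resets to 0.
            System.out.println(perPriorityMissed + " vs perApp=" + perAppMissed);
          }
        }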

        This cannot happen; see the following code: if (allocation.state == AllocationState.LOCALITY_SKIPPED) .....

        Thanks, I had missed observing this part of the code. But consider when ResourceRequest.getRelaxLocality is false:
        RegularContainerAllocator.assignContainersOnNode(...) returns PRIORITY_SKIPPED, so is there a chance of priority inversion?

        That case is "cannot use", whereas the non-exclusive delay case is "cannot be satisfied currently"

        IIUC, you are indicating that for RRs with different priorities but the same partition, priority inversion should not happen?

        Naganarasimha G R added a comment -

        Correction to my previous comment: "and return AllocationState.APP_SKIPPED" => "and not return AllocationState.APP_SKIPPED"

        Naganarasimha G R added a comment -

        Hi Tan, Wangda,
        Any thoughts on my previous comment?
        If not required, I can reword the description to cover only the first issue and rework the patch too!

        Wangda Tan added a comment -

        Hi Naganarasimha G R,

        Thanks for the comments, and apologies for the delay.

        Now, maybe after 10 non-exclusive node heartbeats, if a container gets assigned for priority 10, then mNPRSO for the priority-20 request resumes from where it left off, i.e. 6. Should it not start from 0?

        It's a valid concern, but I think it's a corner case:

        • It's only valid when the resource requests of different priorities are the same.
        • The example in your comment (requesting a higher priority while it has some pending lower-priority containers) is not as frequent as a normal container request.
        • The worst case is waiting out a node-locality delay, which is not very bad.

        I understand there are some issues in our existing approach to handling locality delay with priority; this is why I filed YARN-4189. I would prefer not to add additional complexity/behavior changes to the existing delay scheduling mechanism unless it's critical (e.g. YARN-4287).

        RegularContainerAllocator.assignContainersOnNode(...) returns PRIORITY_SKIPPED, so is there a chance of priority inversion?

        To me, if a request cannot be satisfied because of hard restrictions (e.g. partition/hard locality), we should give lower priorities a chance in the existing delay scheduling implementation.
        You can take a look at the YARN-4189 design doc, where I have listed the existing cases in which delay scheduling could cause priority inversion. I think these issues cannot be resolved in an easy way.

        Naganarasimha G R added a comment -

        Thanks for the comments, Tan, Wangda.
        I agree the frequency of the scenario I mentioned is very low. I presume I can now treat issue 2 as won't-fix and rework the patch to cover only issue 1.

        Naganarasimha G R added a comment -

        Hi Tan, Wangda,
        I have removed the modifications for:

        When an app has submitted requests for multiple priorities in the default partition, if one of the priority requests has missed non-partitioned resource requests equivalent to the cluster size, then a container needs to be allocated. Currently, if the higher-priority requests don't satisfy the condition, the whole application gets skipped instead of just that priority.

        Please check whether the latest patch is fine.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
        +1 mvninstall 7m 58s trunk passed
        +1 compile 0m 25s trunk passed with JDK v1.8.0_66
        +1 compile 0m 31s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 13s trunk passed
        +1 mvnsite 0m 36s trunk passed
        +1 mvneclipse 0m 16s trunk passed
        +1 findbugs 1m 12s trunk passed
        +1 javadoc 0m 21s trunk passed with JDK v1.8.0_66
        +1 javadoc 0m 27s trunk passed with JDK v1.7.0_91
        +1 mvninstall 0m 31s the patch passed
        +1 compile 0m 24s the patch passed with JDK v1.8.0_66
        +1 javac 0m 24s the patch passed
        +1 compile 0m 28s the patch passed with JDK v1.7.0_91
        +1 javac 0m 28s the patch passed
        +1 checkstyle 0m 13s the patch passed
        +1 mvnsite 0m 35s the patch passed
        +1 mvneclipse 0m 12s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 1m 19s the patch passed
        +1 javadoc 0m 19s the patch passed with JDK v1.8.0_66
        +1 javadoc 0m 24s the patch passed with JDK v1.7.0_91
        -1 unit 65m 52s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        -1 unit 66m 33s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
        +1 asflicense 0m 20s Patch does not generate ASF License warnings.
        150m 17s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
        JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12782888/YARN-4557.v3.001.patch
        JIRA Issue YARN-4557
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 0752a41988f5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / d40859f
        Default Java 1.7.0_91
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10319/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10319/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10319/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10319/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10319/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Max memory used 77MB
        Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/10319/console

        This message was automatically generated.

        Wangda Tan added a comment -

        Looks good, +1. Thanks Naganarasimha G R,

        One nit is:

        // Test case 1
        // Both a/b has used_capacity(x) = 0, when doing exclusive allocation, b
        // will go first since b has more capacity(x)


        It should be "a should go first" according to your test case.

        Naganarasimha G R added a comment -

        Thanks for the review, Tan, Wangda.
        Yes, it was a typo; I have corrected it in the latest patch.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
        +1 mvninstall 10m 40s trunk passed
        +1 compile 0m 47s trunk passed with JDK v1.8.0_66
        +1 compile 0m 43s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 17s trunk passed
        +1 mvnsite 0m 51s trunk passed
        +1 mvneclipse 0m 21s trunk passed
        +1 findbugs 1m 39s trunk passed
        +1 javadoc 0m 38s trunk passed with JDK v1.8.0_66
        +1 javadoc 0m 39s trunk passed with JDK v1.7.0_91
        +1 mvninstall 0m 44s the patch passed
        +1 compile 0m 45s the patch passed with JDK v1.8.0_66
        +1 javac 0m 45s the patch passed
        +1 compile 0m 39s the patch passed with JDK v1.7.0_91
        +1 javac 0m 39s the patch passed
        +1 checkstyle 0m 17s the patch passed
        +1 mvnsite 0m 46s the patch passed
        +1 mvneclipse 0m 18s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 1m 52s the patch passed
        +1 javadoc 0m 34s the patch passed with JDK v1.8.0_66
        +1 javadoc 0m 35s the patch passed with JDK v1.7.0_91
        -1 unit 70m 41s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        -1 unit 68m 9s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
        +1 asflicense 0m 24s Patch does not generate ASF License warnings.
        164m 9s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
        JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12783012/YARN-4557.v3.002.patch
        JIRA Issue YARN-4557
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux a03c3e9155cc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / edc43a9
        Default Java 1.7.0_91
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10326/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10326/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10326/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10326/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10326/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Max memory used 77MB
        Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/10326/console

        This message was automatically generated.

        Naganarasimha G R added a comment -

        TestApplicationPriority is not failing because of the modifications in this patch; YARN-4614 has been raised for it.

        Wangda Tan added a comment -

        Committed to trunk/branch-2/2.8, thanks Naganarasimha G R!

        Hudson added a comment -

        FAILURE: Integrated in Hadoop-trunk-Commit #9145 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9145/)
        YARN-4557. Fix improper Queues sorting in PartitionedQueueComparator (wangda: rev 5ff5f67332b527acaca7a69ac421930a02ca55b3)

        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
        • hadoop-yarn-project/CHANGES.txt
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PartitionedQueueComparator.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestNodeLabelContainerAllocation.java

        Naganarasimha G R added a comment -

        Thanks for the review and the commit, Tan, Wangda.


          People

          • Assignee:
            Naganarasimha G R
          • Reporter:
            Naganarasimha G R
          • Votes:
            0
          • Watchers:
            5

            Dates

            • Created:
              Updated:
              Resolved:

              Development