Details

    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      The RM CapacityScheduler web UI does not currently show node label usage; without it, users find it hard to understand what is happening on nodes that have labels assigned to them.

      1. 2015.05.06 Folded Queues.png
        53 kB
        Wangda Tan
      2. 2015.05.06 Queue Expanded.png
        218 kB
        Wangda Tan
      3. 2015.05.07_3362_Queue_Hierarchy.png
        237 kB
        Naganarasimha G R
      4. 2015.05.10_3362_Queue_Hierarchy.png
        380 kB
        Naganarasimha G R
      5. 2015.05.12_3362_Queue_Hierarchy.png
        395 kB
        Naganarasimha G R
      6. AppInLabelXnoStatsInSchedPage.png
        291 kB
        Naganarasimha G R
      7. capacity-scheduler.xml
        4 kB
        Wangda Tan
      8. CSWithLabelsView.png
        249 kB
        Naganarasimha G R
      9. No-space-between-Active_user_info-and-next-queues.png
        14 kB
        Wangda Tan
      10. Screen Shot 2015-04-29 at 11.42.17 AM.png
        202 kB
        Wangda Tan
      11. YARN-3362.20150428-3.patch
        15 kB
        Naganarasimha G R
      12. YARN-3362.20150428-3-modified.patch
        16 kB
        Wangda Tan
      13. YARN-3362.20150506-1.patch
        18 kB
        Naganarasimha G R
      14. YARN-3362.20150507-1.patch
        25 kB
        Naganarasimha G R
      15. YARN-3362.20150510-1.patch
        26 kB
        Naganarasimha G R
      16. YARN-3362.20150511-1.patch
        26 kB
        Naganarasimha G R
      17. YARN-3362.20150512-1.patch
        26 kB
        Naganarasimha G R
      18. YARN-3362-branch-2.7.002.patch
        26 kB
        Naganarasimha G R
      19. YARN-3362-branch-2.7.003.patch
        27 kB
        Eric Payne
      20. YARN-3362-branch-2.7.004.patch
        27 kB
        Eric Payne

        Issue Links

          Activity

          vinodkv Vinod Kumar Vavilapalli added a comment -

          Closing the JIRA as part of 2.7.3 release.

          eepayne Eric Payne added a comment -

          Changing Fix Version to 2.7.3 since branch 2.7.3 has not yet been created.

          Naganarasimha Naganarasimha G R added a comment -

          YARN-4751 has been checked in, so the label UI will now be available in 2.7 as well.

          Naganarasimha Naganarasimha G R added a comment -

          Will close this JIRA once YARN-4751 is committed and closed...

          Naganarasimha Naganarasimha G R added a comment -

          Committed the patch to the 2.7 branch. Thanks Eric Payne for working on the 2.7 patch, and thanks Sunil G and Tan, Wangda for the review.
          We also need to push YARN-4751 ASAP, as the current patch is not complete without it.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Eric Payne, as in earlier runs, the findbugs and test-case failures are not related to the patch. The latest patch LGTM; committing it shortly!

          eepayne Eric Payne added a comment -

          Thanks, Naganarasimha Garla. Have you been able to look at this latest patch?

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 14m 30s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 5m 54s branch-2.7 passed
          +1 compile 0m 23s branch-2.7 passed with JDK v1.8.0_91
          +1 compile 0m 26s branch-2.7 passed with JDK v1.7.0_101
          +1 checkstyle 0m 27s branch-2.7 passed
          +1 mvnsite 0m 32s branch-2.7 passed
          +1 mvneclipse 0m 14s branch-2.7 passed
          -1 findbugs 1m 2s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in branch-2.7 has 1 extant Findbugs warnings.
          +1 javadoc 0m 19s branch-2.7 passed with JDK v1.8.0_91
          +1 javadoc 0m 23s branch-2.7 passed with JDK v1.7.0_101
          +1 mvninstall 0m 27s the patch passed
          +1 compile 0m 21s the patch passed with JDK v1.8.0_91
          +1 javac 0m 21s the patch passed
          +1 compile 0m 25s the patch passed with JDK v1.7.0_101
          +1 javac 0m 25s the patch passed
          -1 checkstyle 0m 23s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: patch generated 50 new + 687 unchanged - 43 fixed = 737 total (was 730)
          +1 mvnsite 0m 31s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          -1 whitespace 0m 0s The patch has 3610 line(s) that end in whitespace. Use git apply --whitespace=fix.
          -1 whitespace 1m 22s The patch has 497 line(s) with tabs.
          +1 findbugs 1m 13s the patch passed
          +1 javadoc 0m 16s the patch passed with JDK v1.8.0_91
          +1 javadoc 0m 21s the patch passed with JDK v1.7.0_101
          -1 unit 49m 42s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_91.
          -1 unit 50m 14s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_101.
          +1 asflicense 0m 16s Patch does not generate ASF License warnings.
          131m 19s



          Reason Tests
          JDK v1.8.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
          JDK v1.7.0_101 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.TestClientRMTokens



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:c420dfe
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12803004/YARN-3362-branch-2.7.004.patch
          JIRA Issue YARN-3362
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 09b10c34372c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2.7 / b8e01da
          Default Java 1.7.0_101
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/11382/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/11382/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/11382/artifact/patchprocess/whitespace-eol.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/11382/artifact/patchprocess/whitespace-tabs.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/11382/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_91.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/11382/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_101.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/11382/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_91.txt https://builds.apache.org/job/PreCommit-YARN-Build/11382/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_101.txt
          JDK v1.7.0_101 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/11382/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/11382/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          naganarasimha_gr@apache.org Naganarasimha Garla added a comment -

          Sure, will review it, and as it is close I can commit it too...

          eepayne Eric Payne added a comment -

          Naganarasimha G R, attaching YARN-3362-branch-2.7.004.patch with another checkstyle change correcting the order of final and protected.

          Once pre-commit build comes back, can you please review?
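
          For reference, checkstyle's ModifierOrder check expects modifiers in the order recommended by the Java Language Specification, so the access modifier comes before final. A minimal illustration (the class and field names here are made up, not taken from the patch):

            public class ModifierOrderExample {
              // Flagged by checkstyle (ModifierOrder): 'final' placed before the access modifier.
              // final protected String queuePath = "root.default";

              // Accepted ordering: access modifier first, then 'final'.
              protected final String queuePath = "root.default";
            }
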

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 15m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 5m 55s branch-2.7 passed
          +1 compile 0m 24s branch-2.7 passed with JDK v1.8.0_91
          +1 compile 0m 26s branch-2.7 passed with JDK v1.7.0_101
          +1 checkstyle 0m 26s branch-2.7 passed
          +1 mvnsite 0m 32s branch-2.7 passed
          +1 mvneclipse 0m 14s branch-2.7 passed
          -1 findbugs 1m 1s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in branch-2.7 has 1 extant Findbugs warnings.
          +1 javadoc 0m 19s branch-2.7 passed with JDK v1.8.0_91
          +1 javadoc 0m 23s branch-2.7 passed with JDK v1.7.0_101
          +1 mvninstall 0m 27s the patch passed
          +1 compile 0m 22s the patch passed with JDK v1.8.0_91
          +1 javac 0m 22s the patch passed
          +1 compile 0m 24s the patch passed with JDK v1.7.0_101
          +1 javac 0m 24s the patch passed
          -1 checkstyle 0m 23s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: patch generated 51 new + 688 unchanged - 43 fixed = 739 total (was 731)
          +1 mvnsite 0m 31s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          -1 whitespace 0m 0s The patch has 3261 line(s) that end in whitespace. Use git apply --whitespace=fix.
          -1 whitespace 1m 14s The patch has 497 line(s) with tabs.
          +1 findbugs 1m 11s the patch passed
          +1 javadoc 0m 16s the patch passed with JDK v1.8.0_91
          +1 javadoc 0m 21s the patch passed with JDK v1.7.0_101
          -1 unit 49m 28s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_91.
          -1 unit 50m 33s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_101.
          -1 asflicense 2m 20s Patch generated 61 ASF License warnings.
          134m 3s



          Reason Tests
          JDK v1.8.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_101 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:c420dfe
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12802515/YARN-3362-branch-2.7.003.patch
          JIRA Issue YARN-3362
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux d8ec840485c6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2.7 / 4311e5f
          Default Java 1.7.0_101
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/11349/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/11349/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/11349/artifact/patchprocess/whitespace-eol.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/11349/artifact/patchprocess/whitespace-tabs.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/11349/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_91.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/11349/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_101.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/11349/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_91.txt https://builds.apache.org/job/PreCommit-YARN-Build/11349/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_101.txt
          JDK v1.7.0_101 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/11349/testReport/
          asflicense https://builds.apache.org/job/PreCommit-YARN-Build/11349/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/11349/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          eepayne Eric Payne added a comment -

          Naganarasimha G R, uploading YARN-3362-branch-2.7.003.patch.

          Most of the checkstyle warnings were from the previous (trunk) patch. However, there were a few that I introduced, so I fixed those. I also fixed some of the others, including making methods and parameters final where needed and adding minimal javadocs.
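
          As a rough illustration of that kind of checkstyle cleanup (the class and method below are invented for the example, not taken from the patch), adding final to a parameter and a minimal javadoc looks like this:

            import java.util.HashMap;
            import java.util.Map;

            /** Example only: demonstrates a final parameter plus a minimal javadoc. */
            public class PartitionCapacityExample {
              private final Map<String, Float> capacities = new HashMap<String, Float>();

              /**
               * Returns the configured capacity for the given partition.
               *
               * @param partition the node-label partition name
               * @return the configured capacity, or 0 if the partition is unknown
               */
              public float getCapacity(final String partition) {
                Float value = capacities.get(partition);
                return value == null ? 0f : value;
              }
            }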

          Naganarasimha Naganarasimha G R added a comment -

          Hi Eric Payne, can you have a look at the checkstyle warnings? Maybe only a few are actually related to the patch.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 16m 7s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 6m 5s branch-2.7 passed
          +1 compile 0m 24s branch-2.7 passed with JDK v1.8.0_91
          +1 compile 0m 26s branch-2.7 passed with JDK v1.7.0_95
          +1 checkstyle 0m 28s branch-2.7 passed
          +1 mvnsite 0m 34s branch-2.7 passed
          +1 mvneclipse 0m 15s branch-2.7 passed
          -1 findbugs 1m 0s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in branch-2.7 has 1 extant Findbugs warnings.
          +1 javadoc 0m 19s branch-2.7 passed with JDK v1.8.0_91
          +1 javadoc 0m 24s branch-2.7 passed with JDK v1.7.0_95
          +1 mvninstall 0m 27s the patch passed
          +1 compile 0m 22s the patch passed with JDK v1.8.0_91
          +1 javac 0m 22s the patch passed
          +1 compile 0m 24s the patch passed with JDK v1.7.0_95
          +1 javac 0m 24s the patch passed
          -1 checkstyle 0m 25s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: patch generated 70 new + 687 unchanged - 43 fixed = 757 total (was 730)
          +1 mvnsite 0m 30s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          -1 whitespace 0m 0s The patch has 2913 line(s) that end in whitespace. Use git apply --whitespace=fix.
          -1 whitespace 1m 12s The patch has 497 line(s) with tabs.
          +1 findbugs 1m 12s the patch passed
          +1 javadoc 0m 18s the patch passed with JDK v1.8.0_91
          +1 javadoc 0m 22s the patch passed with JDK v1.7.0_95
          -1 unit 52m 35s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_91.
          -1 unit 53m 13s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_95.
          -1 asflicense 2m 11s Patch generated 61 ASF License warnings.
          140m 55s



          Reason Tests
          JDK v1.8.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_95 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:c420dfe
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12802312/YARN-3362-branch-2.7.002.patch
          JIRA Issue YARN-3362
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux f1c126e21915 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2.7 / 4311e5f
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/11338/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/11338/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/11338/artifact/patchprocess/whitespace-eol.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/11338/artifact/patchprocess/whitespace-tabs.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/11338/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_91.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/11338/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/11338/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_91.txt https://builds.apache.org/job/PreCommit-YARN-Build/11338/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/11338/testReport/
          asflicense https://builds.apache.org/job/PreCommit-YARN-Build/11338/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/11338/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

          Thanks Eric Payne. Reattaching the same file so that Jenkins triggers with the proper attachment.

          eepayne Eric Payne added a comment -
          Processing: YARN-3362
          YARN-3362 patch is being downloaded at Mon May  2 05:27:05 UTC 2016 from
            https://issues.apache.org/jira/secure/attachment/12799131/AppInLabelXnoStatsInSchedPage.png -> Downloaded
          ERROR: Unsure how to process YARN-3362.
          
          eepayne Eric Payne added a comment -

          Thanks a lot, Naganarasimha G R.

          I don't think the YARN pre-commit build worked. I see the following error message in the console output from both of the following builds:
          https://builds.apache.org/job/PreCommit-YARN-Build/11301/console
          https://builds.apache.org/job/PreCommit-YARN-Build/11304/console

          Naganarasimha Naganarasimha G R added a comment -

          Manually started the Jenkins build as it had not yet started.

          Naganarasimha Naganarasimha G R added a comment -

          To backport it to the 2.7 branch.

          Naganarasimha Naganarasimha G R added a comment -

          Fine with me. Reopening the ticket and starting Jenkins!

          leftnoteasy Wangda Tan added a comment -

          Naganarasimha G R,

          I would prefer the previous option: reopen this ticket, kick Jenkins, and change the status of this JIRA. YARN-4751 can be committed separately. Typically we should backport a JIRA to an older version under the same ID; changing the ID sometimes causes trouble when doing the backport.

          Thoughts?

          Naganarasimha Naganarasimha G R added a comment -

          Sorry for the long delay. I verified the patch in the order you mentioned, built and cross-verified it on 2.7, and I am fine with the modifications too.
          Just one query on how to go about the merge: would it be better to reopen this JIRA and change its state to Patch Available so that Jenkins is triggered, or to merge these changes into YARN-4751? I would prefer the latter. Thoughts?
          cc Tan, Wangda

          eepayne Eric Payne added a comment -

          Naganarasimha Garla, is there any update?

          naganarasimha_gr@apache.org Naganarasimha Garla added a comment -

          Will check once; please wait till EOD...

          eepayne Eric Payne added a comment -

          If you think it would be cleaner, I can combine the patches and submit it as one patch here in YARN-3362.

          I think it looks fine to me.

          Thanks, Sunil G, for your comments. Naganarasimha G R, given that we will keep the patches separate, do you want to comment further on the 2.7 patch for YARN-3362?

          sunilg Sunil G added a comment -

          I tested with both patches, and the metrics were coming through fine (tested with and without labels). But when I applied only the YARN-3362 patch, the same problem appeared that NGarla_Unused has mentioned.

          In YARN-4751, the AbstractCSQueue changes help to display the queue's Used Capacity correctly. I think that was the differentiating change here.

          If you think it would be cleaner, I can combine the patches and submit it as one patch here in YARN-3362.

          I think it looks fine to me.

          Naganarasimha Naganarasimha G R added a comment -

          Oops, let me apply both and test; I had applied only the current patch (YARN-3362-branch-2.7.002.patch). Thanks for the update!

          eepayne Eric Payne added a comment -

          After applying the patch then partition usage information needs to be displayed for the queue under the partition. But after applying the patch i was not able to see the stats in the webui even though the application is submitted to partition and it is running

          Naganarasimha G R, in order to see the stats and bar graph metrics, both YARN-3362-branch-2.7.002.patch and YARN-4751-branch-2.7.004.patch need to be applied. First YARN-3362-branch-2.7.002.patch should be applied and then YARN-4751-branch-2.7.004.patch. Did you do that and are still seeing problems?

          If you think it would be cleaner, I can combine the patches and submit it as one patch here in YARN-3362.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Eric Payne,
          Thanks for providing the background. I went through the comments you mentioned, but I am still not able to tell whether the patch you shared works independently. After applying the patch, partition usage information should be displayed for the queue under the partition, but I was not able to see the stats in the web UI even though the application was submitted to the partition and is running; refer to AppInLabelXnoStatsInSchedPage.png. If I submit to the default partition, the web UI stats are correct.
          If another patch is required to get the web UI working, perhaps you can share that patch and I can review it. I also didn't have time to cross-check why it was not working; if required, I can do the analysis.

          The other JIRAs are good to have, not compulsory.

          eepayne Eric Payne added a comment -
          • When i submit a app to a queue with default node label expression set to a partition then in the scheduler page stats was not getting displayed.

          Other issues which are good to be backported :

          1. i think there is one other jira which was capturing the queue's default node label expression that would also be helpful to be back ported
          2. App's AM's label is also good to be captured in the web ui and displayed.
          3. would it be good to back port MAPREDUCE-6304 ?

          Naganarasimha G R, thank you very much for your review and comments.

          I'm sorry I did not make it clear in this JIRA. Please refer to conversations in YARN-4751 between me, Wangda Tan, and Sunil G.

          The problem is that in order to provide all of the needed functionality in 2.7, we either have to backport several JIRAs which would destabilize 2.7 (please see this comment and this comment) or we need to pick and choose pieces to backport without doing full cherry-picks.

          After the discussion in YARN-4751, we thought it would be best to submit a 2.7 patch here that only backported YARN-3362 and then add the fixes for the metrics in YARN-4751. Please let me know what you think.

          Naganarasimha Naganarasimha G R added a comment -

          Attached AppInLabelXnoStatsInSchedPage.png for the same; I tried submitting to the default queue and to another hierarchical queue, and both had the same behavior.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Eric Payne, I was trying to compile, install, and test; one major blocker was:

          • When I submit an app to a queue whose default node label expression is set to a partition, the stats are not displayed on the scheduler page.

          Other issues that would be good to backport:

          1. I think there is one other JIRA capturing the queue's default node label expression that would also be helpful to backport.
          2. The app's AM label would also be good to capture and display in the web UI.
          3. Would it be good to backport MAPREDUCE-6304?
          Naganarasimha Naganarasimha G R added a comment -

          Thanks Eric Payne, Tan, Wangda, & Sunil G for waiting, taking a look at it shortly ...

          sunilg Sunil G added a comment -

          Thanks Eric Payne for sharing the patch here; much appreciated.
          Overall the patch looks fine to me. Will wait for NGarla_Unused as well.

          leftnoteasy Wangda Tan added a comment -

          + Naganarasimha G R.

          eepayne Eric Payne added a comment -

          Wangda Tan and Sunil G, as we discussed in YARN-4751, I am attaching the 2.7 backport of YARN-3362 to this JIRA. Please see YARN-4751 for a discussion of the additional changes needed for 2.7 that will provide accurate metrics on labeled queues.

          2.7-specific differences in YARN-3362-branch-2.7.002.patch:

          • The RMNodeLabel class is named NodeLabel in 2.7.
          • The CapacitySchedulerInfo constructor doesn't have the CapacityScheduler parameter in 2.7, so I left it out.
          • One of the changes for CapacitySchedulerInfo#getQueues in YARN-3362 added a check to skip non-accessible queues. In 2.7, there are two loops that iterate over the queues, whereas in 2.8 and beyond there is only one. I put the check in the first loop because the first loop filters the queues to be processed by the second loop (a rough sketch of the idea follows below).
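
          A rough sketch of the idea behind that check, using stand-in types rather than the real CSQueue classes (QueueStub, its fields, and filterAccessible are invented for illustration; the actual branch-2.7 change lives in CapacitySchedulerInfo#getQueues):

            import java.util.ArrayList;
            import java.util.Arrays;
            import java.util.HashSet;
            import java.util.List;
            import java.util.Set;

            public class QueueFilterSketch {
              /** Stand-in for a queue: only the bits needed to show the filtering loop. */
              static class QueueStub {
                final String name;
                final Set<String> accessibleLabels; // "*" means accessible to all partitions
                QueueStub(String name, String... labels) {
                  this.name = name;
                  this.accessibleLabels = new HashSet<String>(Arrays.asList(labels));
                }
              }

              /** First loop: keep only queues that can access the requested partition. */
              static List<QueueStub> filterAccessible(List<QueueStub> children, String partition) {
                List<QueueStub> filtered = new ArrayList<QueueStub>();
                for (QueueStub q : children) {
                  if (q.accessibleLabels.contains("*") || q.accessibleLabels.contains(partition)) {
                    filtered.add(q); // only these reach the second loop that builds the queue info
                  }
                }
                return filtered;
              }

              public static void main(String[] args) {
                List<QueueStub> children = Arrays.asList(
                    new QueueStub("a", "*"), new QueueStub("b", "labelX"), new QueueStub("c"));
                for (QueueStub q : filterAccessible(children, "labelX")) {
                  System.out.println(q.name); // prints a and b; c is skipped as non-accessible
                }
              }
            }
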
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2143 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2143/)
          YARN-3362. Add node label usage in RM CapacityScheduler web UI. (Naganarasimha G R via wangda) (wangda: rev 0e85044e26da698c45185585310ae0e99448cd80)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #185 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/185/)
          YARN-3362. Add node label usage in RM CapacityScheduler web UI. (Naganarasimha G R via wangda) (wangda: rev 0e85044e26da698c45185585310ae0e99448cd80)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerInfo.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2125 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2125/)
          YARN-3362. Add node label usage in RM CapacityScheduler web UI. (Naganarasimha G R via wangda) (wangda: rev 0e85044e26da698c45185585310ae0e99448cd80)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerInfo.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #195 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/195/)
          YARN-3362. Add node label usage in RM CapacityScheduler web UI. (Naganarasimha G R via wangda) (wangda: rev 0e85044e26da698c45185585310ae0e99448cd80)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #196 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/196/)
          YARN-3362. Add node label usage in RM CapacityScheduler web UI. (Naganarasimha G R via wangda) (wangda: rev 0e85044e26da698c45185585310ae0e99448cd80)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #927 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/927/)
          YARN-3362. Add node label usage in RM CapacityScheduler web UI. (Naganarasimha G R via wangda) (wangda: rev 0e85044e26da698c45185585310ae0e99448cd80)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
          Naganarasimha Naganarasimha G R added a comment -

Thanks for reviewing and committing the JIRA, Tan, Wangda. Yes, it would be better to discuss these CS queue hierarchy modifications in YARN-3638; I will update our discussions there.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #7825 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7825/)
          YARN-3362. Add node label usage in RM CapacityScheduler web UI. (Naganarasimha G R via wangda) (wangda: rev 0e85044e26da698c45185585310ae0e99448cd80)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
          leftnoteasy Wangda Tan added a comment -

          Committed to trunk/branch-2, thanks Naganarasimha G R and review from Vinod Kumar Vavilapalli.

          leftnoteasy Wangda Tan added a comment -

          Last call for comments, as I'm planning to commit today.

          leftnoteasy Wangda Tan added a comment -

          Naganarasimha G R,
Thanks for pointing to YARN-3638. I think what you want can be addressed there, or is at least more related to YARN-3638, right? I suggest moving the discussion to YARN-3638 to keep this JIRA focused.

          Naganarasimha Naganarasimha G R added a comment -

Thanks for the comment Tan, Wangda,
This idea is also good, but I can think of a few issues here:

• In most cases the queue hierarchy will be collapsed, so the space between a non-leaf queue name and the bar might not look good.
• The longer the names of the leaf queues, the more space there is between the queue name and the bar (at the upper-level queues).
• Space-based calculation might not work out, so implementing it with Hamlet might be a little complex.
• Here, were you thinking of publishing absolute usage or usage at the queue level? I feel absolute usage would be better (YARN-3638 also suggests it), but the existing UI shows just usage at the queue level.

Also, I feel these modifications can take time; if this JIRA is important, then we can push it in and further discuss the UI enhancements in another JIRA. Thoughts?

          leftnoteasy Wangda Tan added a comment -

          Naganarasimha G R, thanks for updating, the latest result looks great!

About the queue-hierarchy discussion, I think one alternative may be to keep the hierarchical queue names but align the usage bars, like the following:

          root        [------------------------------ 100% used]
            - a       [----------------- 60% used]
              - a1    [------------- 40% used]
              - a2    [-----------]
            - b       [-------------]
              - b1    [---------]
                - b11 [----------]
          

This would also help with comparing queues' resources, without needing an extra button to hide/show the queue hierarchy.
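As an illustration only, here is a small standalone Java sketch of this layout (it is not RM/Hamlet code; the queue names, depths, and usage percentages are invented). It pads the hierarchical queue name to a fixed column and then draws a bar proportional to usage, so all bars start at the same column and can be compared directly:

          import java.util.List;

          public class AlignedQueueBars {
            // Hypothetical queue entry: depth in the hierarchy, name, and used capacity in percent.
            record Queue(int depth, String name, int usedPercent) {}

            public static void main(String[] args) {
              List<Queue> queues = List.of(
                  new Queue(0, "root", 100),
                  new Queue(1, "a", 60),
                  new Queue(2, "a1", 40),
                  new Queue(2, "a2", 35),
                  new Queue(1, "b", 45),
                  new Queue(2, "b1", 30));

              int nameColumn = 12; // fixed column where every bar starts
              int barWidth = 30;   // width of a 100% bar
              for (Queue q : queues) {
                String label = "  ".repeat(q.depth) + (q.depth > 0 ? "- " : "") + q.name;
                String bar = "-".repeat(barWidth * q.usedPercent / 100);
                System.out.printf("%-" + nameColumn + "s[%-" + barWidth + "s] %d%% used%n",
                    label, bar, q.usedPercent);
              }
            }
          }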

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 14m 47s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 1 new or modified test files.
          +1 javac 7m 50s There were no new javac warning messages.
          +1 javadoc 10m 6s There were no new javadoc warning messages.
          +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
          +1 checkstyle 0m 30s There were no new checkstyle issues.
          +1 whitespace 0m 2s The patch has no lines that end in whitespace.
          +1 install 1m 35s mvn install still works.
          +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
          +1 findbugs 1m 16s The patch does not introduce any new Findbugs (version 2.0.3) warnings.
          -1 yarn tests 51m 59s Tests failed in hadoop-yarn-server-resourcemanager.
              89m 6s  



          Reason Tests
          Timed out tests org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerQueueACLs



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12732150/YARN-3362.20150512-1.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 987abc9
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/7878/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/7878/testReport/
          Java 1.7.0_55
          uname Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/7878/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda
I have updated the patch and the image, appending each label with its available resource.

          Naganarasimha Naganarasimha G R added a comment -

Thanks for the comment Tan, Wangda. Here are my comments inline.

The default of max-cap is 100 because the queue can use such resources without configuring it. Let me know if you have more questions.

IIUC, the current code only shows it in the UI as 100%, and only at the location I mentioned (CapacitySchedulerQueueInfo, line 73), but the actual absolute capacity is zero, which makes it look like no bar is shown. I think it's a bug in the UI, and it should also work the way we are discussing (max-cap is 100 because the queue can use such resources without configuring it); let me test and raise it.

About showing resources of partitions, I think it's very helpful. I think you can include the used-resource of each partition as well. You can file a separate ticket if it is hard to add with this ticket.

I think it's a simple change; give me some time and I will get the modifications done.

          About "Hide Hierarchy", I think it's good for queue capacity comparison, but admin may get confused after checked "Hide Hierarchy", it's better to be added to some other places instead of modify queue UI itself.

I meant that the label on the button should also toggle, i.e. when hierarchies are shown the label can be Hide Hierarchy / Show only Leaf Queues, and when only leaf queues are shown the label can be Show Queue Hierarchy. If the idea is OK, I can raise and handle it in another JIRA, as it involves a lot more discussion, but I felt it is particularly useful in our labels view: there are already many things displayed in the UI, and with more labels or a deeper queue hierarchy it will become more chaotic.

          leftnoteasy Wangda Tan added a comment -

          The latest patch LGTM.

          leftnoteasy Wangda Tan added a comment -

          Hi Naga,
          Thanks for updating,

          1) To your questions: https://issues.apache.org/jira/browse/YARN-3362?focusedCommentId=14537181&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14537181,
You can refer to YARN-2824 for more information about why the default capacity of labeled resources is set to zero.
The default of max-cap is 100 because the queue can use such resources without configuring it. Let me know if you have more questions.

About showing resources of partitions, I think it's very helpful. I think you can include the used-resource of each partition as well. You can file a separate ticket if it is hard to add with this ticket.

          3) About "Hide Hierarchy", I think it's good for queue capacity comparison, but admin may get confused after checked "Hide Hierarchy", it's better to be added to some other places instead of modify queue UI itself.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 14m 39s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 1 new or modified test files.
          +1 javac 7m 34s There were no new javac warning messages.
          +1 javadoc 9m 35s There were no new javadoc warning messages.
          +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 1m 46s The applied patch generated 12 new checkstyle issues (total was 145, now 144).
          +1 whitespace 0m 2s The patch has no lines that end in whitespace.
          +1 install 1m 42s mvn install still works.
          +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
          +1 findbugs 1m 16s The patch does not introduce any new Findbugs (version 2.0.3) warnings.
          +1 yarn tests 52m 10s Tests passed in hadoop-yarn-server-resourcemanager.
              89m 44s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12731834/YARN-3362.20150511-1.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 4536399
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/7857/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/7857/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/7857/testReport/
          Java 1.7.0_55
          uname Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/7857/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

Fixing the valid checkstyle issues in the patch (lines longer than 80 characters are not corrected, for readability).

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 14m 43s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 1 new or modified test files.
          +1 javac 7m 35s There were no new javac warning messages.
          +1 javadoc 9m 36s There were no new javadoc warning messages.
          +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 0m 59s The applied patch generated 15 new checkstyle issues (total was 145, now 147).
          +1 whitespace 0m 2s The patch has no lines that end in whitespace.
          +1 install 1m 39s mvn install still works.
          +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
          +1 findbugs 1m 16s The patch does not introduce any new Findbugs (version 2.0.3) warnings.
          +1 yarn tests 52m 19s Tests passed in hadoop-yarn-server-resourcemanager.
              89m 12s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12731796/YARN-3362.20150510-1.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 4536399
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/7853/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/7853/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/7853/testReport/
          Java 1.7.0_55
          uname Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/7853/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

Also, a few suggestions for the UI:

• Earlier, the overall available resource info was shown in the cluster metrics, so I think it was easier for the viewer to relate the percentages; but here the available resource info for a given label is present only in the node labels page. So I would suggest showing the label resource info like:
            + Partition: xxx [memory:8192, vCores:8]
              + Queue: root
                 + Queue: a
                 + Queue: b
            + Partition: yyy [memory:4096, vCores:8]
              + Queue: root
                 + Queue: a
                 + Queue: b
            
• I think the queue hierarchy is not always helpful when I want to compare capacities among leaf queues (through the bars), so I would suggest an additional view without the queue hierarchy when the user clicks a button (a kind of toggle button).
                            -------------------
                            | Hide Hierarchy |
                            -------------------
            + Partition: xxx [memory:8192, vCores:8]
              + Queue Path: root.Q1.a
                ....
              + Queue Path: default
            + Partition: yyy [memory:4096, vCores:8]
              + Queue Path: root.Q1.a
                ....
              - Queue Path: default
                   |---------  Partition Specific Metrics -----------|
                   |                 ........                        |
                   |-------------------------------------------------|
                   |---------  Queue General Metrics ----------------|
                   |                 ........                        |
                   |-------------------------------------------------|
                   Active user info:
                   ........ Table ..........
            
          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda
Updating the patch based on your review comments.
While testing, I came across a few issues:

• Why are all labels accessible at root by default? Currently I have removed the label.getIsExclusive() && !((AbstractCSQueue) root).accessibleToPartition(label) check from CapacitySchedulerPage for the root queue for each label, as it's always true.
• Currently the accessibility of labels for non-root queues, if not specified, is * as it is inherited from the parent queue. Is this right?
• When a new partition is added, the configured max capacity is shown as 100 but the absolute max capacity is shown as 0. I didn't understand why this is handled this way on the UI side in CapacitySchedulerQueueInfo (line 73):
              
              if (maxCapacity < EPSILON || maxCapacity > 1f)
                  maxCapacity = 1f;
            

My guess is that this code was added to keep the value in the range of 0 to 1f? If so, shouldn't maxCapacity < EPSILON give maxCapacity = 0, and maxCapacity > 1f give maxCapacity = 1f?
Also, my doubt is why the max capacity is kept as zero if a label is accessible to a queue; if the max capacity is not specified, then it should be 100, right?
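For reference, here is a standalone sketch of the clamping behaviour being discussed; it is not the actual CapacitySchedulerQueueInfo code, just an illustration of the current rule quoted above versus the proposed reading (the EPSILON value is assumed):

            public class MaxCapacityClamp {
              static final float EPSILON = 1e-8f; // assumed value, for illustration only

              // Current rule quoted above: anything below EPSILON or above 1 becomes 1,
              // so an unconfigured (0) max capacity is displayed as 100%.
              static float currentRule(float maxCapacity) {
                if (maxCapacity < EPSILON || maxCapacity > 1f) {
                  maxCapacity = 1f;
                }
                return maxCapacity;
              }

              // Proposed reading: clamp into [0, 1] instead, so 0 stays 0 and only
              // values above 1 are pulled back to 1.
              static float proposedRule(float maxCapacity) {
                return maxCapacity < EPSILON ? 0f : Math.min(maxCapacity, 1f);
              }

              public static void main(String[] args) {
                for (float v : new float[] {0f, 0.5f, 1.5f}) {
                  System.out.printf("max=%.2f -> current=%.2f, proposed=%.2f%n",
                      v, currentRule(v), proposedRule(v));
                }
              }
            }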

          leftnoteasy Wangda Tan added a comment -

I put in a considerable amount of effort, but it didn't work out. Hamlet should have some kind of doc or a book; it seems like enigma code to me. I will file a separate JIRA, as this is required in many places of the CS page: between "Active Users Info" and the following queue, between the Dump scheduler log button and Application Queue, and in the CS Health Block.

Thanks for looking at this; that sounds good, we can address them together in a separate JIRA.

          Naganarasimha Naganarasimha G R added a comment -

          Thanks for the review Tan, Wangda,

Also not caused by your patch: there's no space between "Active Users Info" and the following queue. I'm not sure if there's any easy fix we can do; please feel free to file a separate ticket if it will be hard to solve together.

I put in a considerable amount of effort, but it didn't work out. Hamlet should have some kind of doc or a book; it seems like enigma code to me. I will file a separate JIRA, as this is required in many places of the CS page: between "Active Users Info" and the following queue, between the Dump scheduler log button and Application Queue, and in the CS Health Block.
For the other comments, I will rework ASAP.

          leftnoteasy Wangda Tan added a comment -

          Attached "https://issues.apache.org/jira/secure/attachment/12731236/No-space-between-Active_user_info-and-next-queues.png" to showing there's no space between Active User Info and next queue.

          leftnoteasy Wangda Tan added a comment -

          Hi Naganarasimha G R,
Thanks a lot for updating; it looks much better now! I still have a few minor comments:

          For UI:

          1) "Configured Capacity", "Configured Max Capacity" should be a part of "Queue Status for Partition..."?
2) Not caused by your patch: Absolute Capacity should be "Absolute Configured Capacity" and Absolute Max Capacity should be "Absolute Configured Max Capacity"; could you update them in your patch?
3) Also not caused by your patch: there's no space between "Active Users Info" and the following queue. I'm not sure if there's any easy fix we can do; please feel free to file a separate ticket if it will be hard to solve together.

          For implementation:
1) One minor style comment: you can merge all the capacity-related rendering in CapacitySchedulerPage into a method similar to renderCommonLeafQueueInfo, into which you can merge some of the implementation of render and renderLeafQueueInfoWithoutPartition, and add a method renderLeafQueueInfoWithPartition to make render look cleaner.
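A rough standalone schematic of that refactor, for illustration only (the real code renders through Hamlet/ResponseInfo, which is stubbed here with a StringBuilder, and the helper name renderQueueCapacityInfo is hypothetical):

            public class LeafQueueRenderSketch {
              private final StringBuilder out = new StringBuilder();

              // Shared capacity rows used by both variants.
              private void renderQueueCapacityInfo(String queue) {
                out.append(queue)
                   .append(": configured capacity, configured max capacity, absolute capacities, used capacity...\n");
              }

              // Per-partition view: adds a partition header, then the shared capacity block.
              void renderLeafQueueInfoWithPartition(String queue, String partition) {
                out.append("Queue status for partition ").append(partition).append('\n');
                renderQueueCapacityInfo(queue);
              }

              // Default view: the same capacity block without a partition header.
              void renderLeafQueueInfoWithoutPartition(String queue) {
                renderQueueCapacityInfo(queue);
              }

              public static void main(String[] args) {
                LeafQueueRenderSketch page = new LeafQueueRenderSketch();
                page.renderLeafQueueInfoWithPartition("root.a", "labelX");
                page.renderLeafQueueInfoWithoutPartition("root.default");
                System.out.print(page.out);
              }
            }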

          For your question

Maybe I did not get this completely. The "label is exclusive-label" part I have done in CapacitySchedulerPage.QueuesBlock.render (line 357).

          I think for both CapacitySchedulerPage and CapacitySchedulerInfo, it should be:

          	if (label.getIsExclusive()
                  && !((AbstractCSQueue) root).accessibleToPartition(label.getLabelName())) {
          

When the label is exclusive (nobody can use the label unless the queue is accessible to it) and the queue isn't accessible to the label, we don't need to continue.

          Let me know your thoughts.

          CC: Vinod Kumar Vavilapalli/Jian He.
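To make the skip condition above concrete, here is a self-contained sketch of the idea; the Label and Queue types are simplified stand-ins for the RM's node-label and queue classes, not the real API:

            import java.util.List;
            import java.util.Set;

            public class ExclusiveLabelFilter {
              record Label(String name, boolean exclusive) {}
              record Queue(String name, Set<String> accessibleLabels) {
                boolean accessibleToPartition(String label) {
                  return accessibleLabels.contains("*") || accessibleLabels.contains(label);
                }
              }

              public static void main(String[] args) {
                List<Label> labels = List.of(new Label("x", true), new Label("y", false));
                List<Queue> queues = List.of(
                    new Queue("root.a", Set.of("x")),
                    new Queue("root.b", Set.of()));

                for (Label label : labels) {
                  for (Queue queue : queues) {
                    // Skip rendering only when the label is exclusive AND the queue cannot
                    // access it; a non-exclusive label is rendered for every queue.
                    if (label.exclusive() && !queue.accessibleToPartition(label.name())) {
                      continue;
                    }
                    System.out.println("render " + queue.name() + " under partition " + label.name());
                  }
                }
              }
            }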

          hadoopqa Hadoop QA added a comment -



          +1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 15m 10s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 1 new or modified test files.
          +1 javac 7m 52s There were no new javac warning messages.
          +1 javadoc 9m 53s There were no new javadoc warning messages.
          +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
          +1 checkstyle 0m 26s There were no new checkstyle issues.
          +1 whitespace 0m 2s The patch has no lines that end in whitespace.
          +1 install 1m 37s mvn install still works.
          +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
          +1 findbugs 1m 16s The patch does not introduce any new Findbugs (version 2.0.3) warnings.
          +1 yarn tests 58m 49s Tests passed in hadoop-yarn-server-resourcemanager.
              96m 4s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12731201/YARN-3362.20150507-1.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 8e991f4
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/7762/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/7762/testReport/
          Java 1.7.0_55
          uname Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/7762/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda,
          Please find the new patch addressing your previous comments. I will raise a new JIRA for handling the modifications in the Node Labels page.

          queue can be ignored when: queue cannot access to label AND label is exclusive-label. accessibleToPartition in AbstractCSQueue already considered when label==""

          Maybe I did not get this completely; I have handled the "label is exclusive-label" check in CapacitySchedulerPage.QueuesBlock.render (line 357).

          leftnoteasy Wangda Tan added a comment -

          Attached folded/expanded queues screenshots.

          leftnoteasy Wangda Tan added a comment -

          Hi Naga,
          Thanks for updating. I took a closer look at the latest patch and tried it in my local cluster; some comments on the code:
          1) There's some duplicated code in CapacitySchedulerInfo; you only need to keep one constructor, which always takes a label.
          2) Also, protected CapacitySchedulerQueueInfoList getQueues doesn't need special logic when null == nodeLabel; a queue can be ignored when the queue cannot access the label AND the label is an exclusive label. accessibleToPartition in AbstractCSQueue already handles label==""
          3) CapacitySchedulerQueueInfo doesn't need two constructors either.
          4) When you want to get "NO_LABEL", you should pass "RMNodeLabelsManager.NO_LABEL" instead of null (see the sketch below).
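
          As a rough illustration of points 1) and 4) together, the single constructor could look something like this (field names and the exact parameter list are assumptions for the sketch, not the actual patch code):

            // Sketch only: one label-aware constructor; callers pass
            // RMNodeLabelsManager.NO_LABEL (i.e. "") for the default partition instead of null.
            public CapacitySchedulerInfo(CSQueue parent, String nodeLabel) {
              this.queueName = parent.getQueueName();
              // ... populate the other common fields ...
              this.queues = getQueues(parent, nodeLabel); // no special null handling needed
            }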

          Some comments in UI:
          1) I've reconsidered the hierarchy; I think it's better to rename "Node_Label=<...>" to "Partition:<...>". For the Node Labels page, we should add a "Node Label Type" column and indicate that they're partitions.
          2) <NO_LABEL> should be <DEFAULT_PARTITION>; <NO_LABEL> itself may not be clear enough. Also, in the Node Labels page, we should change <NO_LABEL> to the same name as here. Changes in the Node Labels page could be added in a separate JIRA.
          3) It's better to indicate queues in the hierarchy as well; now we have:

          + Partition: xxx
            + root
               + a
               + b
          

          I think it's better to add a "Queue:" before queue name:

          + Partition: xxx
            + Queue: root
               + Queue: a
               + Queue: b
          

          Otherwise people may get confused about which one is a partition and which one is a queue.

          4) In the queue's metrics overview table, mixing partition-specific and queue-generic metrics is not clear enough. I'm not sure if you can show it like this (whether the existing Hamlet framework supports this or not; see also the grouping sketch after the mock-up):

          + Partition: xxx
            + Queue: root
               + Queue: a
                 |---------  Partition Specific Metrics -----------|
                 |  Used Capacity      |    0.0%                   |
                 |  Absolute Capacity  |   50.0%                   |
                             ........
                 |-------------------------------------------------|
          
                 |---------  Queue General Metrics ----------------|
                 |          Other Metrics                          |
                 ---------------------------------------------------
          
                 Active user info:
                 ........ Table ..........
          
               + Queue: b
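
          One framework-agnostic way to think about point 4) is to group the two kinds of metrics separately before rendering; a sketch with invented names, not code from the patch:

            // Sketch only: keep partition-specific and queue-general metrics in separate
            // groups so the page can render them as two tables, followed by Active Users Info.
            Map<String, String> partitionMetrics = new LinkedHashMap<String, String>();
            partitionMetrics.put("Used Capacity (Partition=" + label + ")",
                String.format("%.1f%%", usedCapacity * 100));
            partitionMetrics.put("Absolute Capacity (Partition=" + label + ")",
                String.format("%.1f%%", absoluteCapacity * 100));

            Map<String, String> queueMetrics = new LinkedHashMap<String, String>();
            queueMetrics.put("Num Active Applications", String.valueOf(numActiveApplications));
            // ... render partitionMetrics, then queueMetrics, then the Active Users table ...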
          
          vinodkv Vinod Kumar Vavilapalli added a comment -

          Can you please post the latest screenshot?

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 14m 40s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 javac 7m 33s There were no new javac warning messages.
          +1 javadoc 9m 33s There were no new javadoc warning messages.
          +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 0m 46s The applied patch generated 11 new checkstyle issues (total was 100, now 109).
          -1 whitespace 0m 2s The patch has 7 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 install 1m 33s mvn install still works.
          +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
          -1 findbugs 1m 17s The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.
          -1 yarn tests 62m 56s Tests failed in hadoop-yarn-server-resourcemanager.
              99m 19s  



          Reason Tests
          FindBugs module:hadoop-yarn-server-resourcemanager
            Load of known null value in org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerInfo.getQueues(CSQueue, String) At CapacitySchedulerInfo.java:in org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerInfo.getQueues(CSQueue, String) At CapacitySchedulerInfo.java:[line 110]
          Timed out tests org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12730873/YARN-3362.20150506-1.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 185e63a
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/7740/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/7740/artifact/patchprocess/whitespace.txt
          Findbugs warnings https://builds.apache.org/job/PreCommit-YARN-Build/7740/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/7740/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/7740/testReport/
          Java 1.7.0_55
          uname Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/7740/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda,
          Please find the attached patch for the labels (tested on a 2-node cluster), and some pending work (maybe we can raise separate JIRAs to handle these, as you suggested earlier):

          1. I feel that along with the labels we can show how many resources are available; right now we need to go to the Node Labels page and come back to find that out.
          2. Support showing apps by queue by label (filtering apps in a queue by label needs an additional CS interface; see the sketch after this list).
          3. Active-user info by queue by label (active apps, pending apps & user AM resource limit).
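
          For item 2, a purely hypothetical sketch of what such an additional CS interface could look like (the method name and signature are invented for illustration, not an existing API):

            // Hypothetical addition -- not an existing scheduler method.
            List<ApplicationAttemptId> getAppsInQueueByPartition(String queueName, String partition);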
          leftnoteasy Wangda Tan added a comment -

          Well i understand that in the later patches we are targetting it more as partition than labels, but in that case shall i modify the same in other locations of WEB like node labels page, in CS page shall i mark it as Accessible Partitions ?

          Good point. I think we may need to keep it as "label" and do the renaming in a separate patch.

          in CS page shall i mark it as Accessible Partitions

          We can keep calling it "label" to avoid confusion.

          you mean if no node is mapped to cluster node label then not to show that Node Label ?

          What I have in mind is to show all node labels, no matter whether they are mapped to nodes/queues or not. We can optimize this easily in the future; I prefer to keep the information complete before people post their questions.

          you mean the existing names of metrics entries needs to be appended with (Partition=xxx) and not to show both right ?

          I think we need to show both (partition-specific and queue-general); the only change is to append (Node-Label=xxx).

          Its great to hear its working fine, but it worked without any modifications to the patch ?

          Forgot to mention, I modified the patch a little bit and removed some of the avoid-displaying checks you mentioned at https://issues.apache.org/jira/browse/YARN-3362?focusedCommentId=14517364&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14517364.
          Uploading the modified patch as well as the CS config for you to test.

          Naganarasimha Naganarasimha G R added a comment -

          Thanks Tan, Wangda, for reviewing and testing the patch.

          partition=partition-name

          Well, I understand that in the later patches we are targeting it more as a partition than as labels, but in that case shall I modify the same in other locations of the web UI, like the Node Labels page? In the CS page, shall I mark it as Accessible Partitions?

          But it's good to keep avoid showing "label" when there's no label in your cluster.

          You mean if no node is mapped to a cluster node label, then we should not show that node label?

          Showing partition of partition-specific queue metrics

          You mean the existing names of the metrics entries need to be appended with (Partition=xxx), and we should not show both, right?

          It seems multi hierarchy works well in my environment.

          It's great to hear it's working fine, but did it work without any modifications to the patch? If so, can you share your cluster setup (topology) and CS configuration offline, so that I can test it further?

          leftnoteasy Wangda Tan added a comment -

          Hi Naga,
          Thanks for taking the initiative on this. I just tried to run the patch locally, and it looks great! Some comments:

          1) Show partition=partition-name for every partition; if the partition is the NO_LABEL partition, show that it is YARN.DEFAULT.PARTITION.
          2) I think it's better to show labels that are not accessible, especially for the non-exclusive node label case; we can optimize this in a future patch. This avoids people asking questions like "where is my label?". This applies to all existing "avoid displaying" items in your patch. But it's good to keep avoiding the "label" display when there's no label in your cluster.
          3) Show the partition for the partition-specific queue metrics; they are:

          • Used Capacity: 0.0%
          • Absolute Used Capacity: 0.0%
          • Absolute Capacity: 50.0%
          • Absolute Max Capacity: 100.0%
          • Configured Capacity: 50.0%
          • Configured Max Capacity: 100.0%
            I suggest adding (Partition=xxx) at the end of these metrics (see the sketch below).
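
            A minimal sketch of the suggested naming, combining points 1) and 3) (variable names are illustrative only):

              // Sketch only: display name for the partition and the suffixed metric title.
              String partitionDisplay = label.isEmpty() ? "YARN.DEFAULT.PARTITION" : label;
              String metricTitle = "Used Capacity (Partition=" + partitionDisplay + ")";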

          I attached the queue hierarchy shown in my local cluster: https://issues.apache.org/jira/secure/attachment/12729256/Screen%20Shot%202015-04-29%20at%2011.42.17%20AM.png. It seems the multi-level hierarchy works well in my environment.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda, I have uploaded a WIP patch. I have not done much testing, one of the main reasons being that I am not able to get the hierarchy right when multiple labels are present. I am attaching this initial version of the patch so that feedback can come earlier. I hope someone can help me understand the foldable lists of the Hamlet framework and review the modifications in CapacitySchedulerPage (lines 324-341) for it.
          Also, one more thing: I have tried to avoid displaying

          • Labels which do not have any resource (no node mapping)
          • Labels which are not accessible
          • Sub-queue hierarchies, if the labels are not accessible to them.
          leftnoteasy Wangda Tan added a comment -

          Since this patch tends to create a new RM web view, I suggest doing it in a compatible way so that people will not get lost when using the new view:

          • When labels are not enabled in the cluster or there's only the "default" label, it will not show the "label hierarchy", so it will be very similar to what we have in the old RM web UI.
          • All of a queue's capacities and usages (by partition) can be found in CSQueue.getQueueCapacities/getQueueResourceUsages (see the sketch after this list).
          • We may not need to consider showing apps by partition with this patch. Filtering apps in a queue by partition needs an additional CS interface, which should be addressed in a separate patch.
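
          For the second bullet, a minimal sketch of reading the per-partition numbers off a queue (the exact getters on QueueCapacities/ResourceUsage are assumptions here):

            // Sketch only: per-partition capacity and usage lookups for one queue.
            QueueCapacities caps = queue.getQueueCapacities();
            float configuredCapacity = caps.getCapacity(labelName);   // configured capacity for the partition
            float usedCapacity = caps.getUsedCapacity(labelName);     // used capacity for the partition
            Resource usedResource = queue.getQueueResourceUsage().getUsed(labelName);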
          leftnoteasy Wangda Tan added a comment -

          If this is the case then the approach which you specified makes sense but by "can" you mean currently its not there and in future it can come in ?

          Some of them already exist, like user-limit, and some of them are coming, like am-resource-percent.

          Sorry, I may not understand your question. user-limit and queue-limit are just two different limits regardless of node labels; sometimes user-limit is higher and sometimes queue-limit is higher. Could you explain your question (maybe with an example)?

          Thanks,

          Naganarasimha Naganarasimha G R added a comment -

          Thanks for the feedback Wangda Tan,

          different labels under same queue can have different user-limit/capacity/maximum-capacity/max-am-resource, etc.

          If this is the case then the approach which you specified makes sense, but by "can" do you mean it's currently not there and can come in the future?

          More than the repeated info, another drawback I can see is this: suppose for a particular label the user limit is not reached, but overall at the queue level the user has reached their limit; it will then be difficult for the user to go through all the labels and find out whether they have reached the queue limit. Correct me if my understanding of this is wrong.

          leftnoteasy Wangda Tan added a comment -

          For the active-user info, we need some queue-user-by-label metrics as well, such as used-resource-by-user-and-label, which can be placed in the queue-label metrics table.

          leftnoteasy Wangda Tan added a comment -

          Hi Naganarasimha G R,
          Thanks for your comments,

          There will be some common queue metrics across the labels, wont it get repeated across for each label if a queue is mapped to multiple labels?

          Some common fields may get repeated (like absolute max capacity, etc.). Repeating some of them is not a very big issue to me. I think we can show queue-label metrics + queue-common metrics for each queue-label.

          IIUC most of the queue Metrics might not be specific to a label, like Capacity, Absolute max capacity, Max apps, Max AM's per user etc... . Correct me if my understanding on this is wrong.

          Yes, they are, but there are more parameters/metrics in queues for both label and queue; different labels under the same queue can have different user-limit/capacity/maximum-capacity/max-am-resource, etc. We also need to show them to users if possible.

          Apart from the label specific queue metrics like (label capacity, label abs capacity,used) are there any new Label specific queue metrics you have in your mind ?

          I think above can answer your question.

          would it be better to list like

          If we have this view:
          1) How do you show label-specific metrics?
          2) What does "used-resource" at the queue level mean (used-resource makes more sense when it's per-label)?
          3) How do we check "label-wise" resource usage for parent queues?

          Also if required we can have seperate page (/in the labels page/append at the end of CS page) like

          I think my proposal is still a little clearer: we need to show label-wise metrics to the user. With that, the user can clearly understand the resource usage for each partition (just check each label's usage). Also, a parent's label-wise usage can be shown as well.

          Naganarasimha Naganarasimha G R added a comment -

          Thanks Tan, Wangda,
          Regarding the approach to the display, I had a few concerns:

          • There will be some common queue metrics across the labels; won't they get repeated for each label if a queue is mapped to multiple labels?
          • IIUC, most of the queue metrics might not be specific to a label, like Capacity, Absolute Max Capacity, Max Apps, Max AMs per user, etc. Correct me if my understanding of this is wrong.
          • Apart from the label-specific queue metrics (label capacity, label absolute capacity, used), are there any new label-specific queue metrics you have in mind?
          • Would it be better to list it like:
            + root [=====================] 30% used
              + a  [=======================================] 75% used
                + a1 [=================]  30% used
            	       ---------------------------------------------
            	      |          Queue Metrics                     |
            	      |--------------------------------------------|
            	      |       metrics1       |            value1   |
            	      |       metrics2       |            value2   |
                       ---------------------------------------------
            	      |          Active Users info  (yarn-3273)    |
            	      |--------------------------------------------|
            	      |       user1       |            info        |
            	      |       user2       |            info        |
                       ---------------------------------------------
            	      |     Label Resource usage info              |
            	      |--------------------------------------------|
            	      | label_x  [=====================] 30% used  |
            	      | label_y  [================] 20% used       |
            	      ----------------------------------------------
                + a2 [=================]  30% used
                ...
            
          • Also, if required, we can have a separate page (or put it in the labels page, or append it at the end of the CS page) like:
            + label_x  [=====================] 30% used [Actual Resource - Used resource ]
            	+ root [=====================] 30% used [Actual Resource - Used resource ]
            	  + a  [=======================================] 75% used [Actual Resource - Used resource ]
            	    + a1 [=================]  30% used [Actual Resource - Used resource ]
            + label_y
                + root [...]
                + ...
            + label_z
                + root [...]
            

          YARN-3273 has added more info to the CS page, so we need to consider the size of the page and its usability.
          Please provide your thoughts on the same.

          leftnoteasy Wangda Tan added a comment -

          It's yours. Looking forward to your patch.

          Thanks,

          Naganarasimha Naganarasimha G R added a comment -

          Hi Wangda,
          I would like to work on this issue, hence I have assigned it to myself; if you have already started working on it, please feel free to reassign.

          leftnoteasy Wangda Tan added a comment -

          My proposal is:

          Right now, the RM CapacityScheduler UI looks like:

          + root [==========================] 50% used
            + a  [=======================================] 75% used
              - a1 [=================]  30% used
                ---------------------------------------------
                |          Queue Metrics Table               |
                |--------------------------------------------|
                |       metrics1       |            value1   |
                |       metrics2       |            value2   |
                |       metrics3       |            value3   |
                |       metrics4       |            value4   |
                ----------------------------------------------
            + b [...]
            + c [...]
          

          We can add one more level above the queue hierarchy, for the labels that can be accessed and/or are being used by the queue, which can look like:

          + label_x  [=====================] 30% used
          	+ root [=====================] 30% used
          	  + a  [=======================================] 75% used
          	    + a1 [=================]  30% used
          	      ---------------------------------------------
          	      |          Queue Metrics Table (For label_x) |
          	      |--------------------------------------------|
          	      |       metrics1       |            value1   |
          	      |       metrics2       |            value2   |
          	      |       metrics3       |            value3   |
          	      |       metrics4       |            value4   |
          	      ----------------------------------------------
          + label_y
              + root [...]
              + ...
          + label_z
              + root [...]
              + ...
          + no_label
              + root [...]
              + ...
          

          To make it backward compatible, when there's no label in the system, it will not show "label-bar", and root is still "root-queue".

          Please feel free to share your ideas on this!


            People

            • Assignee:
              Naganarasimha Naganarasimha G R
              Reporter:
              leftnoteasy Wangda Tan
            • Votes:
              0
              Watchers:
              12

              Dates

              • Created:
                Updated:
                Resolved:

                Development