Hadoop YARN / YARN-2492 (Clone of YARN-796): Allow for (admin) labels on nodes and resource-requests / YARN-4304

AM max resource configuration per partition to be displayed/updated correctly in UI and in various partition related metrics

    Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.7.1
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: webapp
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      Since we now support per-partition max AM resource percentage configuration, the UI and various metrics also need to display the correct configuration for it.
      For example, the current UI still shows the AM resource percentage at the queue level only; this should be updated correctly when node-label configuration is used (an illustrative sketch follows the list below).

      • Display max-am-percentage per partition in the Scheduler UI (per label as well) and on the Cluster Metrics page
      • Update queue/partition-related metrics w.r.t. the per-partition am-resource-percentage
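
      For illustration only, here is a rough Java sketch of how a per-partition AM resource limit can be derived from a partition's total resource, the queue's capacity on that partition, and the per-partition max AM percentage. It uses Hadoop's Resources helper, but it is only a sketch of the idea, not the actual CapacityScheduler limit calculation.

        import org.apache.hadoop.yarn.api.records.Resource;
        import org.apache.hadoop.yarn.util.resource.Resources;

        public class PartitionAmLimitSketch {
          // AM limit for one queue on one partition:
          // partition total * queue capacity on the partition * max AM percent on the partition.
          static Resource amLimit(Resource partitionTotal, float queueCapacity, float maxAmPercent) {
            Resource queueShare = Resources.multiply(partitionTotal, queueCapacity);
            return Resources.multiply(queueShare, maxAmPercent);
          }

          public static void main(String[] args) {
            // e.g. a 100 GB / 100 vcore partition, queue capacity 50%, max AM percent 10%
            Resource partitionTotal = Resource.newInstance(100 * 1024, 100);
            System.out.println(amLimit(partitionTotal, 0.5f, 0.1f)); // roughly 5 GB, 5 vcores
          }
        }
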
      Attachments

      1. 0001-YARN-4304.patch
        19 kB
        Sunil G
      2. 0002-YARN-4304.patch
        31 kB
        Sunil G
      3. 0003-YARN-4304.patch
        25 kB
        Sunil G
      4. 0004-YARN-4304.patch
        40 kB
        Sunil G
      5. 0005-YARN-4304.patch
        38 kB
        Sunil G
      6. 0005-YARN-4304.patch
        37 kB
        Sunil G
      7. 0006-YARN-4304.patch
        40 kB
        Sunil G
      8. 0007-YARN-4304.patch
        45 kB
        Sunil G
      9. 0008-YARN-4304.patch
        46 kB
        Sunil G
      10. 0009-YARN-4304.patch
        48 kB
        Sunil G
      11. 0010-YARN-4304.patch
        48 kB
        Sunil G
      12. 0011-YARN-4304.modified.patch
        49 kB
        Wangda Tan
      13. 0011-YARN-4304.patch
        49 kB
        Sunil G
      14. REST_and_UI.zip
        192 kB
        Sunil G

        Issue Links

          Activity

          Bibin A Chundatt added a comment -

          Hi Sunil G,
          Cluster metrics also need updating along with the Scheduler page.
          Currently the Total Memory & Total VCores in the cluster metrics show only the DEFAULT_PARTITION resources. Should I raise a separate JIRA for the same?

          Naganarasimha G R added a comment -

          Hi Bibin A Chundatt, good point. I think we can sum it up and show it along with this patch; as it is a small change, it is better done here itself. What do you say, Sunil G?

          Bibin A Chundatt added a comment -

          Not a problem at all.
          Sunil G please do consider metrics too.

          Sunil G added a comment -

          Thanks Bibin A Chundatt for pointing out. As Naga mentioned, I will handle this case also in this patch.

          Naganarasimha G R added a comment -

          Also, I think we can convert this into a sub-task of YARN-2492, as it's not a bug and you have just added the functionality of partition-specific AM resources.

          Sunil G added a comment -

          Thanks Naga. Marked as sub task.

          Wangda Tan added a comment -

          Thanks Sunil G for opening this; +1 for doing this and addressing cluster metrics as well.

          I would appreciate it if you could take a look at the other cluster/queue metrics to see if there are any other partition-related metrics that need to be fixed. And could you update the title of this JIRA?

          Sunil G added a comment -

          Uploading an initial version of the patch.

          • For cluster metrics, I still followed the approach of available + allocated to get the total resource of the queue/cluster. So whenever a resource update happens, I think we need to recalculate the availableMB (and cores) based on all partitions in that queue. This patch follows that idea.
          • In the Scheduler page, all the AM-resource-related fixes are done. I will attach a screenshot soon.

          Kindly help to check the patch.

          Wangda Tan added a comment -

          Hi Sunil G,

          1) The way CSQueueUtils calculates the max available resource for a queue does not look correct to me. In my mind, it should be:

          available-resource = Σ (componentwiseMax(p.max-resource - p.used, 0))
                               p ∈ {accessible-partitions-of-queue}

          We should only consider accessible partitions, and if a partition's usage is more than its max-resource, we should count it as 0 instead of a negative value (a small sketch of this calculation follows below).

          2) Looked at the REST/web API changes as well; they seem good to me. Could you attach a screenshot/REST response as well?
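
          A minimal sketch of the formula above, using Hadoop's Resources helper (componentwiseMax, subtract, addTo); the per-partition max/used lookups are passed in as plain maps here as an assumption for illustration, not the actual CSQueueUtils signature.

            import java.util.Map;
            import org.apache.hadoop.yarn.api.records.Resource;
            import org.apache.hadoop.yarn.util.resource.Resources;

            class QueueAvailableResourceSketch {
              // available = sum over accessible partitions p of componentwiseMax(max(p) - used(p), 0)
              static Resource availableToQueue(Map<String, Resource> maxByPartition,
                                               Map<String, Resource> usedByPartition) {
                Resource available = Resources.createResource(0, 0);
                for (Map.Entry<String, Resource> e : maxByPartition.entrySet()) {
                  Resource used = usedByPartition.getOrDefault(e.getKey(), Resources.none());
                  // Clamp to zero so an over-used partition never contributes a negative value.
                  Resource headroom = Resources.componentwiseMax(
                      Resources.subtract(e.getValue(), used), Resources.none());
                  Resources.addTo(available, headroom);
                }
                return available;
              }
            }
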

          Sunil G added a comment -

          Thanks Wangda Tan.
          Yes, this looks fine to me. Let me make the changes and write some more test cases to verify that the metrics come out as desired. I will share the screenshots/REST API output with the updated patch shortly.

          Bibin A Chundatt added a comment -

          Hi Sunil G,

          Could you also check the memory total when container reservation is done for an NM?
          Sunil G added a comment -

          Bibin A Chundatt, Memory Reserved is already a part of ClusterMetrics. Could you please explain what you intended to add here?
          As part of this ticket, I will verify all cluster metrics with and without labels.

          Sunil G added a comment -

          Looks like YARN-3432 is handling the cluster metrics issue for Reserved Memory, so I will not make changes here for reserved metrics. I will try to help review that scenario in YARN-3432. Thanks Bibin A Chundatt for pointing it out.

          Sunil G added a comment -

          Attaching REST output and UI screenshots.

          Sunil G added a comment -

          Attaching an updated version of the patch addressing the comments.

          • getAccessibleNodeLabels can contain "*"; hence, when it is ANY, we need to consider the cluster labels (we can check which labels have resources in that queue at that time). The patch contains this change (a small sketch of the idea follows below).
          • It seems there was an existing bug in showing capacities in the REST API /ws/v1/cluster/scheduler output when labels were enabled: currently it shows only the default label. The changes are in CapacitySchedulerQueueInfo. I handled this fix here as well; if needed, I can spin it off to another ticket, as it is a general node-labels issue. Please advise.

          Wangda Tan, could you please help to check the patch?
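
          A tiny sketch of the "*" (ANY) handling described in the first bullet above; the method and parameter names here are illustrative only, not the actual patch code.

            import java.util.Set;

            class AccessibleLabelsSketch {
              // "*" (ANY) means the queue can access every partition known to the cluster.
              static Set<String> effectiveAccessibleLabels(Set<String> accessibleNodeLabels,
                                                           Set<String> clusterNodeLabels) {
                return accessibleNodeLabels.contains("*") ? clusterNodeLabels : accessibleNodeLabels;
              }
            }
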

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 11m 58s trunk passed
          +1 compile 0m 58s trunk passed with JDK v1.8.0_66
          +1 compile 0m 48s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 20s trunk passed
          +1 mvnsite 0m 56s trunk passed
          +1 mvneclipse 0m 22s trunk passed
          +1 findbugs 1m 52s trunk passed
          +1 javadoc 0m 43s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 39s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 52s the patch passed
          +1 compile 0m 59s the patch passed with JDK v1.8.0_66
          +1 javac 0m 59s the patch passed
          +1 compile 0m 45s the patch passed with JDK v1.7.0_85
          +1 javac 0m 45s the patch passed
          -1 checkstyle 0m 18s Patch generated 9 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 244, now 222).
          +1 mvnsite 0m 53s the patch passed
          +1 mvneclipse 0m 20s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 findbugs 2m 3s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager introduced 2 new FindBugs issues.
          +1 javadoc 0m 43s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 40s the patch passed with JDK v1.7.0_85
          -1 unit 73m 48s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 72m 21s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_85.
          +1 asflicense 0m 32s Patch does not generate ASF License warnings.
          174m 35s



          Reason Tests
          FindBugs module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
            Dead store to nodeLabels in org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getNodeLabelsForQueue() At AbstractCSQueue.java:org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getNodeLabelsForQueue() At AbstractCSQueue.java:[line 618]
            Dead store to nodeLabelInfo in org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueUsersInfoBlock.render(HtmlBlock$Block) At CapacitySchedulerPage.java:org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueUsersInfoBlock.render(HtmlBlock$Block) At CapacitySchedulerPage.java:[line 215]
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
          JDK v1.7.0_85 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12774139/0002-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux e1ece6ae4723 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / db4cab2
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9783/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/9783/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9783/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9783/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9783/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9783/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9783/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9783/console

          This message was automatically generated.

          Naganarasimha G R added a comment -

          Hi Sunil,
          Thanks for working on this patch!
          A few comments:

          1. In the web page, shouldn't the max application (AM) resources for a user and queue be shown at the partition level if they are specified per partition?
          2. The web-rendering code reformatting might not be needed, as I think it was kept that way earlier for better readability. Thoughts?
          3. Regarding the existing bug in showing capacities in the REST API /ws/v1/cluster/scheduler output when labels were enabled (currently it shows only the default label): CapacitySchedulerQueueInfo already has QueueCapacitiesInfo, which encapsulates this list, and it is encapsulated in one more object for better readability (as done in other places). Also, we should not delete any existing fields, as that might break compatibility; because of this, at the outer layer we kept the same fields (capacities based on the default label) and added new fields for getting the capacities of all labels.

          I have some queries on other parts; once I get to analyze more, I will post again.

          Sunil G added a comment -

          Thanks Naga for the comments.

          1. In the web page, shouldn't the max application (AM) resources for a user and queue be shown at the partition level if they are specified per partition?

          The Scheduler UI displays queues inside each partition, so these resources will be per-queue, per-partition. Hence we do not have to change the label to explicitly state per-partition. Does that make sense?

          2. Web-rendering code reformatting might not be needed.

          Yes, I also thought so earlier before doing it. But I think it sometimes becomes difficult when you need to add a line which crosses the 80-character limit. I can change back to the old way if that is the general view.

          3. Existing bug in showing capacities in the REST API /ws/v1/cluster/scheduler output when labels were enabled.

          As of now, if we look at the REST output for the scheduler when labels are used, we can easily see that usedCapacity is 0 (as are the other capacities). If it is kept for the default label, we need to change the variable name at least, else it will convey a wrong meaning and a wrong output. Yes, for cases where labels are not configured, this breaks compatibility.
          I think I will segregate that issue from this one and raise a new ticket, because this ticket is starting to focus on multiple issues; all related discussion can be moved there. Will file a new ticket shortly.

          Sunil G added a comment -

          Updating a new patch addressing the findbugs/checkstyle issues. Also had an offline chat with Rohith Sharma K S regarding formatting in the render code. I think the general consensus is not to reformat, as the whole code is kept ordered for easy understandability; hence I am formatting only the lines which I added.

          Naganarasimha G R added a comment -

          Thanks for the quick response, Sunil G.

          Scheduler UI displays queues inside each partition, so these resources will be per-queue, per-partition. Hence we do not have to change the label to explicitly state per-partition. Does that make sense?

          No, actually, if you see, we represent queue status in two blocks so as to capture which queue information is specific to a label and which is generic to the queue, under the headings "Queue Status for Partition" and "Queue Status".

          As of now, if we look at the REST output for the scheduler when labels are used, we can easily see that usedCapacity is 0 (as are the other capacities). If it is kept for the default label, we need to change the variable name at least, else it will convey a wrong meaning and a wrong output.

          Yeah, I agree with that, but assume the case where node labels are not enabled and an existing user is relying on it; in that case it will be a break in compatibility. Based on discussions with Tan, Wangda in YARN-4162, I had done the modifications that way (refer to the comment there). So basically you can discuss it once with Wangda, and if we conclude on it, then we can rework YARN-4162 instead!
          Also, in some other places you have used the list directly; we had encapsulated it inside a class so that the structure of the XML output is better, as per a comment from Bibin A Chundatt.

          Sunil G added a comment -

          Thank you for the comments.

          No, actually, if you see, we represent queue status in two blocks:

          I didn't understand what you were intending in your earlier comment, hence I gave the clarification that we do not need to change the label. As I see it, you were trying to suggest grouping or moving the partition-level fields under "Queue Status for Partition". That's fine, I will update that.

          Yeah, I agree with that, but assume the case where node labels are not enabled

          Sunil G added a comment -

          Thank you for the comments. Sorry for the partial reply earlier.

          No, actually, if you see, we represent queue status in two blocks:

          I didn't understand what you were intending in your earlier comment, hence I gave the clarification that we do not need to change the label. As I see it, you were trying to suggest grouping or moving the partition-level fields under "Queue Status for Partition". That's fine with me, I will update it that way.

          Yeah, I agree with that, but assume the case where node labels are not enabled:

          I have seen those comments, and sorry that I missed that discussion somehow. I will sync with Wangda Tan offline and bring up the conclusion here. If needed, I feel it can be handled in a new ticket since YARN-4162 is committed.

          Also, in some other places you have used the list directly:

          Yes, I have used it directly. Since we have the encapsulating class, I will make the changes.

          Naganarasimha G R added a comment -

          Hi Sunil G,

          If needed, I feel it can be handled in a new ticket since YARN-4162 is committed.

          No issue with working on a new JIRA, but my only concern is that to get the REST output corrected, people would need to have two JIRAs merged, and YARN-4162 was only recently merged and has not gone into any release yet. But anyway, first let's get Tan, Wangda's thoughts on this.

          Yes, I have used it directly. Since we have the encapsulating class, I will make the changes.

          The reason is that the objects in the XML come one after the other and are not so readable. I was referring to the changes in CapacitySchedulerLeafQueueInfo.

          Also, check how best we can restructure the new class PartitionResourceInfo, or reuse existing classes like PartitionResourceUsageInfo introduced in YARN-4162.

          Sunil G added a comment -

          I understand the ordering/readability concern in the XML format. So a common class like PartitionResourcesInfo can encapsulate this list of PartitionResourceInfo objects, similar to what we have already done with NodeLabelInfo and NodeLabelsInfo (a rough sketch follows below). I will make the necessary changes, and will also wait for Wangda Tan to take a look again, as he has reviewed this part earlier.
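
          A hedged sketch of the kind of wrapper DAO being described, mirroring how NodeLabelsInfo wraps a list of NodeLabelInfo; the class and field names here are illustrative only, not necessarily what the final patch uses.

            import java.util.ArrayList;
            import java.util.List;
            import javax.xml.bind.annotation.XmlAccessType;
            import javax.xml.bind.annotation.XmlAccessorType;
            import javax.xml.bind.annotation.XmlRootElement;

            @XmlRootElement(name = "resources")
            @XmlAccessorType(XmlAccessType.FIELD)
            public class PartitionResourcesInfoSketch {
              // One entry per partition; wrapping the list keeps the XML/JSON output grouped and readable.
              private List<PartitionEntry> resourceUsagesByPartition = new ArrayList<>();

              public List<PartitionEntry> getResourceUsagesByPartition() {
                return resourceUsagesByPartition;
              }

              @XmlAccessorType(XmlAccessType.FIELD)
              public static class PartitionEntry {
                private String partitionName;
                private long usedMemoryMB;
                private long usedVirtualCores;
                // getters/setters omitted in this sketch
              }
            }
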

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 11m 31s trunk passed
          +1 compile 1m 0s trunk passed with JDK v1.8.0_66
          +1 compile 0m 48s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 20s trunk passed
          +1 mvnsite 0m 54s trunk passed
          +1 mvneclipse 0m 20s trunk passed
          +1 findbugs 1m 47s trunk passed
          +1 javadoc 0m 39s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 36s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 50s the patch passed
          +1 compile 0m 55s the patch passed with JDK v1.8.0_66
          +1 javac 0m 55s the patch passed
          +1 compile 0m 43s the patch passed with JDK v1.7.0_85
          +1 javac 0m 43s the patch passed
          -1 checkstyle 0m 19s Patch generated 5 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 224, now 225).
          +1 mvnsite 0m 53s the patch passed
          +1 mvneclipse 0m 19s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 59s the patch passed
          +1 javadoc 0m 39s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 38s the patch passed with JDK v1.7.0_85
          -1 unit 76m 4s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 73m 1s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_85.
          +1 asflicense 0m 37s Patch does not generate ASF License warnings.
          176m 28s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.fair.TestSchedulingPolicy
          JDK v1.7.0_85 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12774266/0003-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 38daf0ac011a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / b4c6b51
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9790/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9790/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9790/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9790/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9790/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9790/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9790/console

          This message was automatically generated.

          Wangda Tan added a comment -

          Thanks for working on this, Sunil G, and for the review comments, Naganarasimha G R.

          My thoughts/comments on the output; I haven't gone through the patch yet.

          1) I see you have the RM node REST response in the zip; did you change any RMNodes-related info?
          2) Instead of putting the max AM values into a separate object, I would prefer to put them into the existing resourceUsageByPartition rather than introducing a new object. Even though max-am-limit is not usage, it describes the upper bound of usage. Thoughts?
          3) Similarly, am-limit should go into queue capacities as well; max_am_perc is now part of QueueCapacities.
          4) When a label is configured, max-am-percent and max-am-resource should be in the upper table of the RM UI (the upper table is for partition-specific properties and the lower table is for other general properties).
          5) Is there any change to the cluster metrics web UI?

          Naganarasimha G R added a comment -

          Thanks Tan, Wangda for sharing your thoughts.

          Instead of putting the max AM values into a separate object, I would prefer to put them into the existing resourceUsageByPartition rather than introducing a new object. Even though max-am-limit is not usage, it describes the upper bound of usage. Thoughts?

          By resourceUsageByPartition do you refer to ResourceUsageInfo present in the parent class CapacitySchedulerQueueInfo? If so, yes; and anyway, as these new classes have not gone into any release yet, we can rename them appropriately, e.g. resourceUsagesByPartition => resourceInfoByPartition, ResourceUsageInfo => ResourceInfo, and PartitionResourceUsageInfo => PartitionResourceInfo. Thoughts?

          Sunil G added a comment -

          Thank you Wangda Tan and Naga for the comments.

          1) I see you have the RM node REST response in the zip; did you change any RMNodes-related info?
          5) Is there any change to the cluster metrics web UI?
          >> Yes. Available resources were not calculated correctly when labels were used, and this was displayed in the cluster metrics. I changed that code in CSQueueUtils as well, hence I attached the UI snapshot and the REST output for nodes to show the updated Available resources metric.

          2) Instead of putting the max AM values into a separate object, I would prefer to put them into the existing resourceUsageByPartition rather than introducing a new object. Even though max-am-limit is not usage, it describes the upper bound of usage. Thoughts?
          3) Similarly, am-limit should go into queue capacities as well; max_am_perc is now part of QueueCapacities.
          >> I agree that we could do it this way, as it is also an upper bound on usage. So a grouping will come for AM Resource Limit and AM Used Resources (also the user-based metric). I will see how best I can group these using the existing DAO object classes. I will also look into renaming the existing classes, as mentioned by Naga, for better readability of the output.

          4) When a label is configured, max-am-percent and max-am-resource should be in the upper table of the RM UI (the upper table is for partition-specific properties and the lower table is for other general properties).
          >> Yes, this will be fine; I will make the necessary changes for the same.

          Sunil G added a comment -

          Attaching an updated version of the patch addressing the comments. Also attached screenshots and REST outputs.

          Wangda Tan, please help to review the same.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 7m 55s trunk passed
          +1 compile 0m 29s trunk passed with JDK v1.8.0_66
          +1 compile 0m 32s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 12s trunk passed
          +1 mvnsite 0m 39s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 1m 14s trunk passed
          +1 javadoc 0m 23s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 28s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 36s the patch passed
          +1 compile 0m 29s the patch passed with JDK v1.8.0_66
          +1 javac 0m 29s the patch passed
          +1 compile 0m 33s the patch passed with JDK v1.7.0_85
          +1 javac 0m 33s the patch passed
          -1 checkstyle 0m 14s Patch generated 20 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 210, now 219).
          +1 mvnsite 0m 39s the patch passed
          +1 mvneclipse 0m 17s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 findbugs 1m 31s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager introduced 4 new FindBugs issues.
          +1 javadoc 0m 26s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 31s the patch passed with JDK v1.7.0_85
          -1 unit 65m 36s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 65m 4s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_85.
          +1 asflicense 0m 22s Patch does not generate ASF License warnings.
          149m 40s



          Reason Tests
          FindBugs module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
            Dead store to a in org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(ResponseInfo, String) At CapacitySchedulerPage.java:org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(ResponseInfo, String) At CapacitySchedulerPage.java:[line 141]
            Dead store to b in org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(ResponseInfo, String) At CapacitySchedulerPage.java:org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(ResponseInfo, String) At CapacitySchedulerPage.java:[line 142]
            Dead store to c in org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(ResponseInfo, String) At CapacitySchedulerPage.java:org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(ResponseInfo, String) At CapacitySchedulerPage.java:[line 143]
            Dead store to t in org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(ResponseInfo, String) At CapacitySchedulerPage.java:org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(ResponseInfo, String) At CapacitySchedulerPage.java:[line 139]
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_85 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12775287/0004-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux b9b9eee872d0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 53e3bf7
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9836/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/9836/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9836/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9836/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9836/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9836/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9836/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9836/console

          This message was automatically generated.

          sunilg Sunil G added a comment -

          The test case failures and the new findbugs warnings are related to this patch.
          I will address them in the next patch.

          leftnoteasy Wangda Tan added a comment -

          Hi Sunil G,
          I took a look at the REST API implementation in the latest patch; some comments:

          By design, PartitionResourcesInfo/ResourcesInfo should be usable by user/queue/app, so we need to keep the fields and their usage generic across these components.

          • amResourceLimit is meaningful to all components. App doesn't use that field for now, but we can keep it and set it to infinity.
          • userAMResourceLimit is not meaningful to queue/app, and it overlaps with user.resourcesInfo.amResourceLimit. I suggest removing it; we can use the amResourceLimit of the queue's first user to show on the UI. Another reason is that in the future different users could have different amResourceLimits.

          Also, ResourcesInfo is the RESTful mapping of ResourceUsage, so the necessary changes need to be made in ResourceUsage as well (maybe rename ResourceUsage to ResourcesInformation?). The renaming could be done in a separate JIRA, but I suggest changing the ResourceUsage implementation in this JIRA.

          If you agree with the above, ResourcesInfo's constructor shouldn't relate to LeafQueue and considerAMUsage; it should simply copy fields from ResourceUsage.
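          To make that concrete, here is a minimal sketch of the suggested direction, assuming ResourceUsage exposes per-partition accessors such as getUsed(partition)/getAMUsed(partition)/getAMLimit(partition) (the direction YARN-4418 heads towards); the class shape and field names below are illustrative only, not the actual patch:

            import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
            import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo;

            // Illustrative sketch only: a per-partition DAO whose constructor copies
            // fields straight from ResourceUsage, with no LeafQueue/considerAMUsage logic.
            public class PartitionResourcesInfoSketch {
              private ResourceInfo used;
              private ResourceInfo reserved;
              private ResourceInfo pending;
              private ResourceInfo amUsed;
              private ResourceInfo amResourceLimit;

              public PartitionResourcesInfoSketch(String partition, ResourceUsage usage) {
                this.used = new ResourceInfo(usage.getUsed(partition));
                this.reserved = new ResourceInfo(usage.getReserved(partition));
                this.pending = new ResourceInfo(usage.getPending(partition));
                this.amUsed = new ResourceInfo(usage.getAMUsed(partition));
                this.amResourceLimit = new ResourceInfo(usage.getAMLimit(partition));
              }

              public ResourceInfo getAMResourceLimit() {
                return amResourceLimit;
              }
            }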

          sunilg Sunil G added a comment -

          Thanks Wangda Tan for the comments.
          Yes, that is fine; we can change those variables as per the comments.
          For ResourceUsage, since it is existing code, we need to add these new items and do the renaming as suggested. Both can be tracked together in another ticket and then used here. I'll create another ticket for that if you feel it's fine. Thank you.

          leftnoteasy Wangda Tan added a comment -

          Sunil G,
          I'm fine with changing ResourceUsage in a separate JIRA (logic change only, not renaming), but I think it's better to finish the ResourceUsage change before this patch; we will have a clearer view of this patch once the ResourceUsage changes are completed.

          Naganarasimha Naganarasimha G R added a comment -

          Sunil G,
          I tested the latest patch on trunk and it seems to work fine; I am not seeing the web UI rendering issue (NPE) that came up with the initial patch. Regarding the implementation, I feel Tan, Wangda's comment "ResourcesInfo's constructor shouldn't relate to LeafQueue and considerAMUsage; it should simply copy fields from ResourceUsage" is valid, and if required we could extend ResourceInfo for the LeafQueue and keep the queue-specific fields there.

          sunilg Sunil G added a comment -

          Thank you Naganarasimha G R for helping to verify the patch. Yes, I am handling Wangda's suggestion in another ticket and have provided a patch there. Once that is resolved, we'll remove the LeafQueue dependency here and depend only on ResourceUsage, as you suggested. Thank you.

          sunilg Sunil G added a comment -

          Uploading a new version of the patch now that YARN-4418 is committed.
          I have also addressed the comments given earlier.

          _("Max Application Master Resources Per User:",
                    resourceUsages.getAMResourceLimit().toString());
          

          For the user AM limit, we do not have any placeholder now except userInfo. Hence I have shown the queue's AM resource limit here, which is not quite correct. I could invoke a few APIs here and try to get the corresponding userInfo object; please suggest if that is needed.
          Wangda Tan, please help to check the same.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 8m 17s trunk passed
          +1 compile 0m 29s trunk passed with JDK v1.8.0_66
          +1 compile 0m 33s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 14s trunk passed
          +1 mvnsite 0m 39s trunk passed
          +1 mvneclipse 0m 16s trunk passed
          +1 findbugs 1m 15s trunk passed
          +1 javadoc 0m 24s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 29s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 34s the patch passed
          +1 compile 0m 32s the patch passed with JDK v1.8.0_66
          +1 javac 0m 32s the patch passed
          +1 compile 0m 32s the patch passed with JDK v1.7.0_91
          +1 javac 0m 32s the patch passed
          -1 checkstyle 0m 14s Patch generated 18 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 267, now 275).
          +1 mvnsite 0m 38s the patch passed
          +1 mvneclipse 0m 16s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 26s the patch passed
          +1 javadoc 0m 24s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 28s the patch passed with JDK v1.7.0_91
          -1 unit 60m 10s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 61m 13s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 27s Patch does not generate ASF License warnings.
          140m 37s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions
            hadoop.yarn.server.resourcemanager.TestClientRMTokens



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12778073/0005-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 7c513432c712 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / c470c89
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10010/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10010/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10010/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10010/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10010/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10010/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 75MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/10010/console

          This message was automatically generated.

          sunilg Sunil G added a comment -

          The test case failures were related to the patch. Uploading a new patch addressing them.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 8m 36s trunk passed
          +1 compile 0m 28s trunk passed with JDK v1.8.0_66
          +1 compile 0m 31s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 14s trunk passed
          +1 mvnsite 0m 38s trunk passed
          +1 mvneclipse 0m 16s trunk passed
          +1 findbugs 1m 19s trunk passed
          +1 javadoc 0m 23s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 29s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 38s the patch passed
          +1 compile 0m 31s the patch passed with JDK v1.8.0_66
          +1 javac 0m 31s the patch passed
          +1 compile 0m 34s the patch passed with JDK v1.7.0_91
          +1 javac 0m 34s the patch passed
          -1 checkstyle 0m 14s Patch generated 18 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 266, now 274).
          +1 mvnsite 0m 41s the patch passed
          +1 mvneclipse 0m 16s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 26s the patch passed
          +1 javadoc 0m 28s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 30s the patch passed with JDK v1.7.0_91
          -1 unit 69m 1s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 66m 37s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 24s Patch does not generate ASF License warnings.
          155m 25s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12778302/0005-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 17371b2afbab 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / d85f729
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10026/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10026/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10026/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10026/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10026/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10026/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/10026/console

          This message was automatically generated.

          sunilg Sunil G added a comment -

          The test case failures are not related. Apart from the known failing tests, the others passed locally. Wangda Tan, could you please take a look at the latest patch?
          As mentioned above:

          ("Max Application Master Resources Per User:",
                    resourceUsages.getAMResourceLimit().toString());
          

          I am showing the queue's AM resource limit as the user AM limit, which is not really correct; getting the real value needs a somewhat roundabout way. I based this implementation on the comment below. Is this what you expect?

          I suggest removing it; we can use the amResourceLimit of the queue's first user to show on the UI.

          leftnoteasy Wangda Tan added a comment -

          Hi Sunil G,

          1) The changes for max-available-to-a-queue may be better split into a separate patch. The major concerns are:

          • performance: for every allocated container, we need to iterate over all labels to get the total resources.
          • I think the longer-term fix should be to add by-partition info to queue metrics, including max/guaranteed/available/used, etc. I can help review the proposal/patches.

          2) There are several places using the following method:

            public synchronized Resource getAMResourceLimitPerPartition(
                String nodePartition)
          

          I think after YARN-4418 we don't need to calculate AMResourceLimitPerPartition every time, so I suggest splitting the method into calculate-and-get and read-only variants. How about calling them calculateAndGetAMResourceLimitPerPartition and getAMResourceLimitPerPartition? getPendingAppDiagnosticMessage/REST-API would use the read-only interface.

          Agree?
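          For illustration, a rough sketch of such a split, assuming the per-partition AM limit is cached in the queue's ResourceUsage through a getAMLimit/setAMLimit pair per partition (as YARN-4418 moves towards); the helper below is invented for the example, and this is not the real LeafQueue code:

            import org.apache.hadoop.yarn.api.records.Resource;
            import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;

            // Illustrative sketch only, not the actual LeafQueue implementation.
            abstract class AMLimitSketch {
              protected final ResourceUsage queueUsage = new ResourceUsage();

              // Invented helper standing in for the real per-partition AM limit math
              // (partition max resource * maxAMResourcePercent).
              protected abstract Resource computeAMLimitForPartition(String nodePartition);

              // Calculate-and-get: recomputes and caches the limit; would be called from
              // activateApplications()/updateClusterResource() style paths.
              public synchronized Resource calculateAndGetAMResourceLimitPerPartition(
                  String nodePartition) {
                Resource amLimit = computeAMLimitForPartition(nodePartition);
                queueUsage.setAMLimit(nodePartition, amLimit);
                return amLimit;
              }

              // Read-only: getPendingAppDiagnosticMessage and the REST API would simply
              // read the cached value, with no recomputation.
              public Resource getAMResourceLimitPerPartition(String nodePartition) {
                return queueUsage.getAMLimit(nodePartition);
              }
            }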

          3) Could you upload screenshots/REST api responses?

          To your question:

          I am showing the queue's AM resource limit as the user AM limit, which is not really correct; getting the real value needs a somewhat roundabout way. I based this implementation on the comment below. Is this what you expect?

          Is it possible to use org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerLeafQueueInfo#users to get the AM resource limit instead? Do you have any concern with this approach? Using the same queue-level limit for both queue and user does not seem correct to me.

          sunilg Sunil G added a comment -

          I think the longer-term fix should be to add by-partition info to queue metrics, including max/guaranteed/available/used, etc. I can help review the proposal/patches.

          Yes, this looks fine to me. I will track this with a different JIRA; that new ticket will cover the cluster-metrics total memory when labels are used.

          How about calling them calculateAndGetAMResourceLimitPerPartition and getAMResourceLimitPerPartition?

          +1. Since we already have this calculated information, we can make use of it. I will make the changes.

          I will upload the REST responses and UI screenshots along with the updated patch.

          Do you have any concern with this approach?

          I was slightly confused by the earlier comment. Ideally we can make use of users, but we would need to get the first user and take his AM limit. That is perfectly fine for now until we have a per-user AM limit.

          leftnoteasy Wangda Tan added a comment -

          Sunil G,

          I was slightly confused by the earlier comment. Ideally we can make use of users, but we would need to get the first user and take his AM limit. That is perfectly fine for now until we have a per-user AM limit.

          Agree! And if there are no users in the queue, I think we can use the queue's AM limit directly (instead of "N/A" or 0, etc.).

          sunilg Sunil G added a comment -

          Attaching a new patch and screenshots.

          leftnoteasy Wangda Tan added a comment -

          Thanks for the update Sunil G,

          Some minor comments:
          1) CapacitySchedulerPage:

          • You can fetch lqinfo.getUsers().getUsersList() just once instead of repeatedly.
          • (resourceUsages.getAmUsed() == null) ? "N/A" — is it better to use Resources.none() instead of "N/A"?

          2) LeafQueue:

          • I'm not sure if this is required:
              public synchronized Resource getAMResourceLimit() {
                // Ensure we calculate limit when its not pre-computed
                if (queueUsage.getAMLimit().equals(Resources.none())) {
            

            Since calculateAndGetAMResourceLimit is called by activateApplications, and activateApplications is called by updateClusterResource, the limit is already updated whenever the cluster resource changes or the queue configuration changes (on initialization).
            I think getAMResourceLimit can safely return queueUsage.getAMLimit() directly.

          • getAMResourceLimit doesn't need a synchronized lock.
          • getUserAMResourceLimit is used only by tests and the REST API. I think the REST API can use the Resource from UsersInfo and AMResourceLimit, with no need to take the queue's synchronized lock. And I think you can move the following code to CapacitySchedulerLeafQueueInfo:
                // Get UserInfo from first user to calculate AM Resource Limit per user.
                ResourceInfo userAMResourceLimit = null;
                if (lqinfo.getUsers().getUsersList().isEmpty()) {
                  // If no users are present, consider AM Limit for that queue.
                  userAMResourceLimit = resourceUsages.getAMResourceLimit();
                } else {
                  userAMResourceLimit = lqinfo.getUsers().getUsersList().get(0)
                      .getResourceUsageInfo().getPartitionResourceUsageInfo(label)
                      .getAMResourceLimit();
                }
            
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 8m 10s trunk passed
          +1 compile 0m 30s trunk passed with JDK v1.8.0_66
          +1 compile 0m 32s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 13s trunk passed
          +1 mvnsite 0m 39s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 1m 16s trunk passed
          +1 javadoc 0m 24s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 28s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 37s the patch passed
          +1 compile 0m 29s the patch passed with JDK v1.8.0_66
          +1 javac 0m 29s the patch passed
          +1 compile 0m 33s the patch passed with JDK v1.7.0_91
          +1 javac 0m 33s the patch passed
          -1 checkstyle 0m 14s Patch generated 19 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 175, now 186).
          +1 mvnsite 0m 38s the patch passed
          +1 mvneclipse 0m 16s the patch passed
          -1 whitespace 0m 0s The patch has 2 line(s) with tabs.
          +1 findbugs 1m 25s the patch passed
          +1 javadoc 0m 23s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 29s the patch passed with JDK v1.7.0_91
          -1 unit 65m 43s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 66m 42s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 23s Patch does not generate ASF License warnings.
          151m 23s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12778878/0006-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 5c81f7f23fb4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2cb5aff
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10058/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/10058/artifact/patchprocess/whitespace-tabs.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10058/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10058/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10058/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10058/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10058/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/10058/console

          This message was automatically generated.

          sunilg Sunil G added a comment -

          Hi Wangda Tan
          Thank you for sharing the comments. Uploading a new patch addressing them.

          And I think you can move the following code to CapacitySchedulerLeafQueueInfo:

          I think if we move this to CapacitySchedulerLeafQueueInfo, we would need to calculate it for all labels accessible to the queue, so lqInfo would need a new list. Given that, is it fine to keep it in CapacitySchedulerPage itself? Thoughts?

          As discussed offline, we will pre-compute the AM limit for all labels in the queue before the loop in activateApplications.
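          For clarity, a small sketch of what that pre-computation could look like, continuing the hypothetical AMLimitSketch from the earlier comment; the set of labels accessible to the queue is assumed to be passed in, and none of this is the actual patch:

            import java.util.Set;

            // Illustrative continuation of AMLimitSketch: refresh every accessible
            // partition's AM limit once, before walking the pending applications.
            abstract class ActivateApplicationsSketch extends AMLimitSketch {

              protected void precomputeAMLimits(Set<String> accessibleNodeLabels) {
                for (String nodePartition : accessibleNodeLabels) {
                  calculateAndGetAMResourceLimitPerPartition(nodePartition);
                }
                // activateApplications() would then only read the cached values via
                // getAMResourceLimitPerPartition(nodePartition) inside its loop over
                // pending applications, instead of recalculating per application.
              }
            }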

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 7m 27s trunk passed
          +1 compile 0m 27s trunk passed with JDK v1.8.0_66
          +1 compile 0m 30s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 13s trunk passed
          +1 mvnsite 0m 36s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 1m 14s trunk passed
          +1 javadoc 0m 22s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 26s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 34s the patch passed
          +1 compile 0m 26s the patch passed with JDK v1.8.0_66
          +1 javac 0m 26s the patch passed
          +1 compile 0m 30s the patch passed with JDK v1.7.0_91
          +1 javac 0m 30s the patch passed
          -1 checkstyle 0m 13s Patch generated 20 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 252, now 264).
          +1 mvnsite 0m 37s the patch passed
          +1 mvneclipse 0m 14s the patch passed
          -1 whitespace 0m 0s The patch has 4 line(s) with tabs.
          +1 findbugs 1m 18s the patch passed
          +1 javadoc 0m 22s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 26s the patch passed with JDK v1.7.0_91
          -1 unit 59m 5s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 60m 23s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 23s Patch does not generate ASF License warnings.
          137m 6s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.TestClientRMTokens



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12779183/0007-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 8efe3bb474d4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / df83230
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10079/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/10079/artifact/patchprocess/whitespace-tabs.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10079/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10079/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10079/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10079/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10079/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 75MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/10079/console

          This message was automatically generated.

          leftnoteasy Wangda Tan added a comment -

          1) LeafQueue:

          • amLimitForEmptyLabel -> amLimit?
          • Since calculateAndGetAMResourceLimitPerPartition set amLimit per partition, do you still need the map: amPartitionLimit?
          • Could you update the user am-limit (in User.resourceUsage) as well when computing the user am-limit in activateApplications? Ideally it's better to directly return the user am-limit when calling getUserAMResourceLimitPerPartition/getUserAMResourceLimit

          2) CapacitySchedulerPage

          • Instead of getAMResourceLimit()
                  if (usersList.isEmpty()) {
                    // If no users are present, consider AM Limit for that queue.
                    userAMResourceLimit = resourceUsages.getAMResourceLimit();
                  }
            

            Shouldn't you use getAMResourceLimit(partition)?

          sunilg Sunil G added a comment -

          Hi Wangda Tan

          Thank you very much for sharing the comments. I have a few doubts about the same.

          Could you update user am-limit (in User.resourceUsage) as well when computing user am-limit at activateApplications?

          We are already doing this in activateApplications, as below:

          user.getResourceUsage().incAMUsed(partitionName,
                    application.getAMResource(partitionName));
          user.getResourceUsage().setAMLimit(partitionName, userAMLimit);
          

          Now if we need to do this in getUserAMResourceLimitPerPartition, we need to pass the username as well. The user name comes from the application:

          // Check user am resource limit
                User user = getUser(application.getUser());
          

          Hence,
          1. We cannot pre-compute the user am-limit the way we pre-compute the queue am-limit before the pendingOrderingPolicy loop in activateApplications.
          2. Since we compute this limit every time, we store it in user.getResourceUsage(). I thought of reusing this. However, a queue can have 0, 1, or multiple users, so picking the correct user is not really predictable for all getters (we do not supply any user name in getAMUserLimit/Partition today, and even if we take the first user, there can be cases where there are 0 users).

          Thoughts?

          CapacitySchedulerPage: instead of getAMResourceLimit(), shouldn't you use getAMResourceLimit(partition)?

                PartitionResourcesInfo resourceUsages =
                    lqinfo.getResources().getPartitionResourceUsageInfo(label);
          
                // Get UserInfo from first user to calculate AM Resource Limit per user.
                ResourceInfo userAMResourceLimit = null;
                ArrayList<UserInfo> usersList = lqinfo.getUsers().getUsersList();
                if (usersList.isEmpty()) {
                  // If no users are present, consider AM Limit for that queue.
                  userAMResourceLimit = resourceUsages.getAMResourceLimit();
                }
          ....
          

          Here resourceUsages is already fetched for the specific label, hence I think we do not need a per-label am-limit lookup here.
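
          To make the activation flow discussed above concrete, here is a minimal, self-contained sketch of the agreed ordering: the queue AM limit is pre-computed once per accessible partition before the pending-application loop, while the user AM limit is computed (and recorded) inside the loop, since it depends on each application's user. All class and method names below are hypothetical stand-ins for illustration only, not the actual LeafQueue/ResourceUsage code from the patch.

          import java.util.HashMap;
          import java.util.List;
          import java.util.Map;

          // Hypothetical sketch of the per-partition AM-limit bookkeeping discussed above.
          public class AmLimitSketch {

            // Simplified stand-in for YARN's Resource.
            static class Resource {
              final long memoryMb;
              final int vcores;
              Resource(long memoryMb, int vcores) { this.memoryMb = memoryMb; this.vcores = vcores; }
            }

            // Simplified stand-in for a pending application.
            static class App {
              final String user;
              final String partition;
              App(String user, String partition) { this.user = user; this.partition = partition; }
            }

            // Recorded here so a UI/REST layer can later read the limits back;
            // these maps play the role of the queue's and users' resource-usage trackers.
            final Map<String, Resource> queueAmLimitByPartition = new HashMap<String, Resource>();
            final Map<String, Resource> userAmLimitByPartitionAndUser = new HashMap<String, Resource>();

            void activateApplications(List<String> accessiblePartitions, List<App> pending) {
              // 1. The queue AM limit does not depend on any user, so it is pre-computed
              //    once per partition before iterating the pending applications.
              for (String partition : accessiblePartitions) {
                queueAmLimitByPartition.put(partition, calculateQueueAmLimit(partition));
              }
              // 2. The user AM limit depends on the application's user, so it is computed
              //    (and recorded) inside the loop, per application.
              for (App app : pending) {
                Resource userAmLimit = calculateUserAmLimit(app.partition, app.user);
                userAmLimitByPartitionAndUser.put(app.partition + "/" + app.user, userAmLimit);
                // ... admit the application only if AM usage stays under both limits ...
              }
            }

            // Placeholders for the real capacity-scheduler arithmetic.
            Resource calculateQueueAmLimit(String partition) { return new Resource(4096, 4); }
            Resource calculateUserAmLimit(String partition, String user) { return new Resource(2048, 2); }
          }

          The recorded maps stand in for what the web UI/REST layer reads back: it only ever sees whatever activateApplications last stored, which is why the limits must be written there as part of activation.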

          leftnoteasy Wangda Tan added a comment -

          Hi Sunil G,

          Thanks for replying; makes sense to me. Is there any update to the REST response? Could you please upload a new REST output/screenshot if there are any changes to them?

          sunilg Sunil G added a comment -

          Thank you Wangda Tan for the comments.

          Addressing the first comment in new patch and also attaching screen shots. Kindly help to check the same.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 7m 56s trunk passed
          +1 compile 0m 28s trunk passed with JDK v1.8.0_66
          +1 compile 0m 30s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 18s trunk passed
          +1 mvnsite 0m 39s trunk passed
          +1 mvneclipse 0m 16s trunk passed
          +1 findbugs 1m 15s trunk passed
          +1 javadoc 0m 23s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 31s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 33s the patch passed
          +1 compile 0m 24s the patch passed with JDK v1.8.0_66
          +1 javac 0m 24s the patch passed
          +1 compile 0m 28s the patch passed with JDK v1.7.0_91
          +1 javac 0m 28s the patch passed
          -1 checkstyle 0m 18s Patch generated 24 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 251, now 263).
          +1 mvnsite 0m 36s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          -1 whitespace 0m 0s The patch has 4 line(s) with tabs.
          +1 findbugs 1m 16s the patch passed
          +1 javadoc 0m 20s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 26s the patch passed with JDK v1.7.0_91
          -1 unit 63m 20s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 64m 52s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 19s Patch does not generate ASF License warnings.
          146m 48s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12780083/0008-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 63178ec55456 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4e4b3a8
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10131/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/10131/artifact/patchprocess/whitespace-tabs.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10131/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10131/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10131/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10131/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10131/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/10131/console

          This message was automatically generated.

          sunilg Sunil G added a comment -

          Test case failures are not related.

          leftnoteasy Wangda Tan added a comment -

          1) REST response: amResourceLimit -> amLimit (it's already part of resources, so there's no need to say it's a resource)
          2) REST response: for a parent queue, queueCapacitiesByPartition contains maxAMLimitPercentage and resourceUsagesByPartition contains amResourceLimit. I would suggest adding a flag so that am-resource-related fields are included in queueCapacitiesByPartition/resourceUsagesByPartition only when the queue is a leaf queue.
          3) I'm not sure whether a user's am limit could ever be greater than the queue's am limit; either way, we should cap the user's am limit by the queue's am limit.
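
          On point 3, the cap is just a component-wise minimum of the two limits. Below is a minimal sketch of that check in plain Java with a stand-in Resource type; in the real scheduler this would go through YARN's resource-calculator utilities rather than hand-rolled math, so treat the names here as hypothetical.

          // Hypothetical sketch: cap the per-user AM limit by the queue AM limit, component-wise.
          public class AmLimitCap {

            static class Resource {
              final long memoryMb;
              final int vcores;
              Resource(long memoryMb, int vcores) { this.memoryMb = memoryMb; this.vcores = vcores; }
              @Override public String toString() { return "<memory:" + memoryMb + ", vCores:" + vcores + ">"; }
            }

            // The user limit must never exceed the queue limit in any dimension.
            static Resource capUserAmLimit(Resource userAmLimit, Resource queueAmLimit) {
              return new Resource(
                  Math.min(userAmLimit.memoryMb, queueAmLimit.memoryMb),
                  Math.min(userAmLimit.vcores, queueAmLimit.vcores));
            }

            public static void main(String[] args) {
              Resource user = new Resource(6144, 6);
              Resource queue = new Resource(4096, 8);
              // Prints <memory:4096, vCores:6>: memory is capped by the queue limit, vcores are not.
              System.out.println(capUserAmLimit(user, queue));
            }
          }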

          sunilg Sunil G added a comment -

          Uploading screen shots and REST o/p as per latest change

          sunilg Sunil G added a comment -

          Thank You Wangda Tan
          Uploading a new patch addressing the comments.

          Also attached the REST o/p with labels after making the changes based on review comments. Kindly help to check the same.

          leftnoteasy Wangda Tan added a comment -

          Hi Sunil,

          Thanks for the update. I can still see "maxAMLimitPercentage" in the parent queue; could you double-check it?

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 8m 0s trunk passed
          +1 compile 0m 31s trunk passed with JDK v1.8.0_66
          +1 compile 0m 31s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 18s trunk passed
          +1 mvnsite 0m 38s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 1m 16s trunk passed
          +1 javadoc 0m 24s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 29s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 33s the patch passed
          +1 compile 0m 28s the patch passed with JDK v1.8.0_66
          +1 javac 0m 28s the patch passed
          +1 compile 0m 30s the patch passed with JDK v1.7.0_91
          +1 javac 0m 30s the patch passed
          -1 checkstyle 0m 17s Patch generated 22 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 251, now 261).
          +1 mvnsite 0m 37s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 21s the patch passed
          +1 javadoc 0m 22s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 25s the patch passed with JDK v1.7.0_91
          -1 unit 60m 35s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 62m 27s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 18s Patch does not generate ASF License warnings.
          141m 37s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions
            hadoop.yarn.server.resourcemanager.TestClientRMTokens



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12780592/0009-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 05932223ab3d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 96d8f1d
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10158/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10158/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10158/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10158/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10158/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10158/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/10158/console

          This message was automatically generated.

          sunilg Sunil G added a comment -

          Hi Wangda Tan
          Thank you for pointing that out. As I see it, maxAMLimitPercentage is a primitive type (float).

          So I think setting it to null won't help there. I will try a few other options and update.

          leftnoteasy Wangda Tan added a comment -

          Hi Sunil,

          Sorry for my late response; I thought I had replied to this comment.

          Would using a non-primitive type such as Float solve the problem?

          sunilg Sunil G added a comment -

          Yes, Wangda Tan.
          I tried this earlier as well and hit the same problem. I may need to wrap it in another DAO object, but that would introduce a new object. Thoughts?

          sunilg Sunil G added a comment -

          Hi Wangda Tan
          I needed to pass maxAMLimitPercentage as Float as well. Now I can hide it for the parent queue. Also attached the REST o/p. Kindly help to review.
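
          For reference, the boxed Float helps because JAXB simply omits fields that are still null when marshalling, while a primitive float is always written out (as 0.0 even when it was never set). Below is a self-contained sketch with a hypothetical DAO (not the actual class or element names from the patch) showing the difference between the two:

          import javax.xml.bind.JAXBContext;
          import javax.xml.bind.Marshaller;
          import javax.xml.bind.annotation.XmlAccessType;
          import javax.xml.bind.annotation.XmlAccessorType;
          import javax.xml.bind.annotation.XmlRootElement;

          // Hypothetical DAO illustrating the primitive-vs-wrapper difference for JAXB.
          @XmlRootElement(name = "partitionCapacitiesSketch")
          @XmlAccessorType(XmlAccessType.FIELD)
          public class PartitionCapacitiesSketch {

            private float capacity = 30.0f;        // primitive: always marshalled, even if "unset" (0.0)
            private Float maxAMLimitPercentage;    // wrapper: omitted from the output while it stays null

            public void setMaxAMLimitPercentage(Float v) { this.maxAMLimitPercentage = v; }

            public static void main(String[] args) throws Exception {
              PartitionCapacitiesSketch parent = new PartitionCapacitiesSketch(); // leaf-only field left null
              PartitionCapacitiesSketch leaf = new PartitionCapacitiesSketch();
              leaf.setMaxAMLimitPercentage(10.0f);

              Marshaller m = JAXBContext.newInstance(PartitionCapacitiesSketch.class).createMarshaller();
              m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
              m.marshal(parent, System.out); // no <maxAMLimitPercentage> element in the output
              m.marshal(leaf, System.out);   // includes <maxAMLimitPercentage>10.0</maxAMLimitPercentage>
            }
          }

          This is the same effect described above for the REST output: a parent queue can leave the wrapper field null and the element disappears, while a leaf queue that sets it still reports the value.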

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 8m 43s trunk passed
          +1 compile 0m 35s trunk passed with JDK v1.8.0_66
          +1 compile 0m 34s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 18s trunk passed
          +1 mvnsite 0m 42s trunk passed
          +1 mvneclipse 0m 16s trunk passed
          +1 findbugs 1m 24s trunk passed
          -1 javadoc 0m 26s hadoop-yarn-server-resourcemanager in trunk failed with JDK v1.8.0_66.
          +1 javadoc 0m 31s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 36s the patch passed
          +1 compile 0m 31s the patch passed with JDK v1.8.0_66
          +1 javac 0m 31s the patch passed
          +1 compile 0m 32s the patch passed with JDK v1.7.0_91
          +1 javac 0m 32s the patch passed
          -1 checkstyle 0m 19s Patch generated 22 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 251, now 261).
          +1 mvnsite 0m 39s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 27s the patch passed
          -1 javadoc 0m 22s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          +1 javadoc 0m 27s the patch passed with JDK v1.7.0_91
          -1 unit 61m 6s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 62m 5s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 19s Patch does not generate ASF License warnings.
          143m 15s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions
            hadoop.yarn.server.resourcemanager.TestClientRMTokens



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12781443/0010-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 9b0ea3697d45 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 0e76f1f
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/10222/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10222/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/10222/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10222/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10222/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10222/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10222/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10222/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/10222/console

          This message was automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Thanks Sunil G,

          The only comment from my side: in CapacitySchedulerInfo, we should avoid putting amLimit into QueueCapacitiesInfo.

          Not related to your patch:
          CapacitySchedulerInfo now duplicates CapacitySchedulerQueueInfo, but it misses a lot of fields such as resource usage (for the root queue), etc.
          I think this could be resolved in a separate JIRA.

          Thoughts?

          sunilg Sunil G added a comment -

          Yes, I will make the change.

          I will raise a ticket to track the same.

          sunilg Sunil G added a comment -

          Updating patch as per the comments. Wangda Tan kindly help to check the same.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 7m 26s trunk passed
          +1 compile 0m 28s trunk passed with JDK v1.8.0_66
          +1 compile 0m 30s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 18s trunk passed
          +1 mvnsite 0m 34s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 1m 11s trunk passed
          +1 javadoc 0m 21s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 26s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 30s the patch passed
          +1 compile 0m 25s the patch passed with JDK v1.8.0_66
          +1 javac 0m 25s the patch passed
          +1 compile 0m 28s the patch passed with JDK v1.7.0_91
          +1 javac 0m 28s the patch passed
          -1 checkstyle 0m 16s Patch generated 22 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 261, now 271).
          +1 mvnsite 0m 34s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 15s the patch passed
          +1 javadoc 0m 19s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 24s the patch passed with JDK v1.7.0_91
          -1 unit 59m 50s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 61m 3s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 17s Patch does not generate ASF License warnings.
          138m 6s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12781872/0011-YARN-4304.patch
          JIRA Issue YARN-4304
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux f6c211018814 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 25051c3
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10246/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10246/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/10246/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10246/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10246/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10246/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 75MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/10246/console

          This message was automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Patch looks good, +1, will wait a couple of days to see if anybody else wants to check the patch.

          Thanks,

          leftnoteasy Wangda Tan added a comment -

          Attached patch fixed test failure.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          -1 patch 0m 4s YARN-4304 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12782800/0011-YARN-4304.modified.patch
          JIRA Issue YARN-4304
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/10315/console

          This message was automatically generated.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9129 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9129/)
          YARN-4304. AM max resource configuration per partition to be (wangda: rev b08ecf5c7589b055e93b2907413213f36097724d)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/PartitionQueueCapacitiesInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/PartitionResourcesInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ResourceUsageInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ResourcesInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/QueueCapacitiesInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/PartitionResourceUsageInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UserInfo.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerInfo.java
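          For illustration only (not part of the committed patch): the per-partition AM limit that the scheduler/webapp classes above surface is essentially the queue's share of a partition's resource, scaled by the configured maximum-am-resource-percent. The sketch below is a minimal, self-contained approximation of that arithmetic; all class, method, and partition names in it are hypothetical and do not mirror the actual LeafQueue/AbstractCSQueue code.

          {code:java}
          // Illustrative sketch only: per-partition AM limit =
          //   partition resource x queue capacity on that partition x maximum-am-resource-percent.
          // Names and numbers here are hypothetical.
          import java.util.HashMap;
          import java.util.Map;

          public class PerPartitionAmLimitSketch {

            /** Resource available in a partition (memory in MB, vcores). */
            static final class Resource {
              final long memoryMb;
              final int vcores;
              Resource(long memoryMb, int vcores) {
                this.memoryMb = memoryMb;
                this.vcores = vcores;
              }
              @Override
              public String toString() {
                return "<memory:" + memoryMb + ", vCores:" + vcores + ">";
              }
            }

            /**
             * AM limit for one queue in one partition: the queue's share of the
             * partition's resource, scaled by the configured max AM percentage.
             */
            static Resource amLimitForPartition(Resource partitionResource,
                float queueCapacityOnPartition, float maxAmResourcePercent) {
              long memory = (long) (partitionResource.memoryMb
                  * queueCapacityOnPartition * maxAmResourcePercent);
              int vcores = (int) (partitionResource.vcores
                  * queueCapacityOnPartition * maxAmResourcePercent);
              return new Resource(memory, vcores);
            }

            public static void main(String[] args) {
              // Hypothetical cluster: DEFAULT_PARTITION has 100 GB / 100 vcores,
              // partition "labelX" has 40 GB / 40 vcores.
              Map<String, Resource> partitionResources = new HashMap<>();
              partitionResources.put("", new Resource(102400, 100)); // DEFAULT_PARTITION
              partitionResources.put("labelX", new Resource(40960, 40));

              float queueCapacity = 0.5f;        // queue gets 50% of each partition
              float maxAmResourcePercent = 0.1f; // 10% of that share may be used by AMs

              for (Map.Entry<String, Resource> e : partitionResources.entrySet()) {
                String partition = e.getKey().isEmpty() ? "DEFAULT_PARTITION" : e.getKey();
                System.out.println(partition + " AM limit = "
                    + amLimitForPartition(e.getValue(), queueCapacity, maxAmResourcePercent));
              }
            }
          }
          {code}

          This is roughly how the per-partition AM limit shown in the Scheduler UI and REST output is derived; the real computation additionally accounts for minimum allocation and user limits.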
          leftnoteasy Wangda Tan added a comment -

          The failure above is because Jenkins ran the build after the patch was committed. I ran the tests before pushing.

          leftnoteasy Wangda Tan added a comment -

          Committed to trunk/branch-2/branch-2.8, thanks Sunil G and thanks Naganarasimha G R/Bibin A Chundatt for reviewing!

          sunilg Sunil G added a comment -

          Thank you very much Wangda. I also ran the test locally and the test case passed. Somehow I missed it earlier among the other known issues.

          leftnoteasy Wangda Tan added a comment -

          Sunil G, np, thanks


            People

            • Assignee:
              sunilg Sunil G
            • Reporter:
              sunilg Sunil G
            • Votes:
              0
            • Watchers:
              9
