  Hadoop YARN
  YARN-3946 (under umbrella YARN-5437: [Umbrella] Add more debug/diagnostic messages to scheduler)

Update exact reason as to why a submitted app is in ACCEPTED state to app's diagnostic message

    Details

    • Hadoop Flags:
      Reviewed

      Description

      Currently there is no direct way to get the exact reason why a submitted app is still in the ACCEPTED state. It should be possible to find out through the RM REST API which requirement is not being met: queue limits being reached, core/memory requirements not being met, the AM limit being reached, etc.
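
      For context, a minimal sketch of how a client could read such a diagnostic once the scheduler populates it, using the existing YarnClient API; the application id and configuration below are placeholders, not values from this JIRA:

      import org.apache.hadoop.yarn.api.records.ApplicationId;
      import org.apache.hadoop.yarn.api.records.ApplicationReport;
      import org.apache.hadoop.yarn.api.records.YarnApplicationState;
      import org.apache.hadoop.yarn.client.api.YarnClient;
      import org.apache.hadoop.yarn.conf.YarnConfiguration;

      public class AcceptedAppDiagnostics {
        public static void main(String[] args) throws Exception {
          // Hypothetical application id; replace with a real cluster timestamp and id.
          ApplicationId appId = ApplicationId.newInstance(1446000000000L, 42);

          YarnClient client = YarnClient.createYarnClient();
          client.init(new YarnConfiguration());
          client.start();
          try {
            ApplicationReport report = client.getApplicationReport(appId);
            if (report.getYarnApplicationState() == YarnApplicationState.ACCEPTED) {
              // With this JIRA, the AM-launch reason is expected to surface here
              // (and in the web UI / REST "diagnostics" field).
              System.out.println("Still ACCEPTED: " + report.getDiagnostics());
            }
          } finally {
            client.stop();
          }
        }
      }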

      1. 3946WebImages.zip
        483 kB
        Naganarasimha G R
      2. YARN-3946.v1.001.patch
        18 kB
        Naganarasimha G R
      3. YARN-3946.v1.002.patch
        50 kB
        Naganarasimha G R
      4. YARN-3946.v1.003.Images.zip
        750 kB
        Naganarasimha G R
      5. YARN-3946.v1.003.patch
        51 kB
        Naganarasimha G R
      6. YARN-3946.v1.004.patch
        54 kB
        Naganarasimha G R
      7. YARN-3946.v1.005.patch
        56 kB
        Naganarasimha G R
      8. YARN-3946.v1.006.patch
        57 kB
        Naganarasimha G R
      9. YARN-3946.v1.007.patch
        59 kB
        Naganarasimha G R
      10. YARN-3946.v1.008.patch
        59 kB
        Naganarasimha G R

        Issue Links

          Activity

          varun_saxena Varun Saxena added a comment -

          Sumit Nigam, thanks for reporting the issue.
          There have been recent changes to the RM scheduler page (primarily for the Capacity Scheduler) in 2.7.0 to enable better debugging of such situations.
          It gives an internal view of what is happening in the scheduler.

          If a submitted app is not moving from the ACCEPTED state to the RUNNING state because its AM cannot be launched due to queue limits, we can debug the cause from the UI by checking the queue information.

          For instance, if the AM cannot be launched, we can check "Max Application Master Resources Per User" and "Max Application Master Resources" to ascertain whether enough resources are available to launch the AM.

          Would the information shown on the 2.7.0 scheduler page be enough to debug the cause?

          varun_saxena Varun Saxena added a comment -

          I hope the intention is primarily debugging and not some other use case, where a REST API may be more suitable.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Sumit Nigam & Varun Saxena, Yes there are enough information in the UI but the user needs to know all the intricacies of the scheduler to understand it, in most of the cases user might not have the understanding to relate the content present in WEB UI, so simple diagnostic message in WEB UI's applications page/CLI/REST will be faster to analyze it and further contact with the admin. Thoughts ?

          rohithsharma Rohith Sharma K S added a comment -

          Currently, an admin has to monitor the scheduler UI and look at the statistics about AM usage per queue and per user in the user usage table (CS only, below each leaf queue) to see which limit has been exceeded or what has gone wrong in the cluster. Identifying the exact reason for the ACCEPTED state out of the many possible reasons would be a good improvement.

          Naganarasimha Naganarasimha G R added a comment -

          @Sumit Nigam, if you are not working on this, I would be interested in taking it up.

          sumit.nigam Sumit Nigam added a comment -

          Hi Varun Saxena,
          Yes, the idea is not only to debug the issue (which, as you rightly mentioned, an admin can do). I am currently on 2.6.0 and will try 2.7.0 when I can, for sure.

          There are too many possible reasons to easily correlate what may have happened: AM level, resource level, queue level, possibly a combination of these, etc. A programmatic API is also useful for applying corrective measures: say, I could resubmit my app to a whole new queue after noticing it is a queue-level capacity issue, or try reserving a container, all programmatically!

          Another important use case is attempting to submit the app (say, through one's own AM) and, after a period of it remaining in the ACCEPTED state, automatically reporting back why the state remains so. A REST API is extremely useful in such a case. With this, it would even be possible to ascertain when a job moves back to the ACCEPTED state from the RUNNING state (RM restart, AM crash + restart). Again, this currently requires looking through logs / the UI to ascertain what happened, which is non-trivial, especially in big clusters.

          I'd agree with Naganarasimha that we should be able to know this without an administrative understanding of the scheduler. Also, I am not working on this.
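
          As a rough illustration of the "corrective measures" idea above, a hedged sketch (not from the patch): poll the application report, and if the app stays in ACCEPTED past a deadline, kill it and resubmit to a fallback queue. The timeout, queue name, and submission-context handling are assumptions for illustration only.

          import org.apache.hadoop.yarn.api.records.ApplicationId;
          import org.apache.hadoop.yarn.api.records.ApplicationReport;
          import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
          import org.apache.hadoop.yarn.api.records.YarnApplicationState;
          import org.apache.hadoop.yarn.client.api.YarnClient;

          public class ResubmitIfStuck {
            // Sketch only: the client is assumed to be initialized and started elsewhere.
            static void resubmitIfStuck(YarnClient client, ApplicationId appId,
                ApplicationSubmissionContext resubmitContext) throws Exception {
              long deadline = System.currentTimeMillis() + 5 * 60 * 1000L; // assumed 5 min budget
              while (System.currentTimeMillis() < deadline) {
                ApplicationReport report = client.getApplicationReport(appId);
                if (report.getYarnApplicationState() != YarnApplicationState.ACCEPTED) {
                  return; // made progress (RUNNING, FINISHED, ...), nothing to do
                }
                Thread.sleep(10000L);
              }
              // Still ACCEPTED: give up on this attempt and resubmit to a fallback queue
              // under a fresh application id (a resubmission cannot reuse the old id).
              client.killApplication(appId);
              ApplicationId newId =
                  client.createApplication().getNewApplicationResponse().getApplicationId();
              resubmitContext.setApplicationId(newId);
              resubmitContext.setQueue("fallbackQueue"); // hypothetical queue name
              client.submitApplication(resubmitContext);
            }
          }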

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda, Rohith Sharma K S, Sunil G, Sumit Nigam & nijel,

          As mentioned by Wangda in his comment on YARN-4091, it is very difficult to capture the status when an app's leaf queue or parent queue is beyond its limit, as it would not be good to loop through all the apps in the hierarchy and update the status on each node update, and it would also lose important information from previous updates.

          So I think the valid cases where we can update AMLaunchDiagnostics in SchedulerApplicationAttempt are (for CS; a rough sketch follows the list):

          • App is in the pending state due to the AMLimit/user limit of the queue
          • App is waiting for resources of the partition for the AM to be launched (once moved out of the pending state)
          • App is waiting for resources of the partition for the AM to be launched, and some nodes are blacklisted (if it fails to launch because of blacklisted nodes)
          • The AMLimit of the queue does not allow the AM to launch
          • The user limit of the queue does not allow the AM to launch
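
          As a rough sketch, these cases could map to per-attempt states carrying a diagnostic string; the enum name and values below are illustrative assumptions, not necessarily what the patch defines.

          // Illustrative mapping of the cases listed above to short diagnostic strings.
          public enum AmLaunchState {
            PENDING("Application is added to the scheduler and is not yet activated"
                + " (queue AM limit / user limit reached)"),
            ACTIVATED("Application is activated, waiting for resources of the partition to launch the AM"),
            ACTIVATED_NODES_BLACKLISTED("Waiting for AM resources; some nodes are blacklisted"),
            QUEUE_AM_LIMIT_EXCEEDED("Queue AM resource limit does not allow the AM to launch"),
            USER_AM_LIMIT_EXCEEDED("User AM resource limit does not allow the AM to launch");

            private final String diagnostic;

            AmLaunchState(String diagnostic) {
              this.diagnostic = diagnostic;
            }

            public String getDiagnostic() {
              return diagnostic;
            }
          }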

          Please check whether the approach is proper; if it is useful and required, then a similar thing can be done for the FairScheduler as well. cc Karthik Kambatla

          I have also taken the liberty to fix some small issues in SchedulerApplicationAttempt.isWaitingForAMContainer in the same patch; if required, I can raise another JIRA and put those small changes there.

          leftnoteasy Wangda Tan added a comment -

          Hi Naganarasimha G R,
          Thanks for working on this. The general idea of the approach looks good; a few suggestions about what to show:

          • AM launch diagnostics should have an initial value after the app is added to the scheduler:
            For an unmanaged AM, it should be "User launched the Application Master since it's unmanaged".
            For a managed AM, it should be "Added to scheduler, waiting to be scheduled", with some general suggestions about configurations to look at, such as user-limit, am-percent, queue-limit, etc.
          • Looping over all applications when the queue exceeds its limit is too costly. I'd prefer to do nothing when this happens.
          • After the application has moved to the activated state, if it is traversed by the scheduler but cannot be allocated any resource, you may put something like "Trying to allocate AM on node=x, etc.". After YARN-4091 we should be able to get more detailed information about why this happened.
          • Not caused by your patch: isWaitingForAMContainer checks whether the master container has been created; you may also need to check whether the application is in the recovering state, because the AM could contact the RM before the AM container is recovered by the RM.
          • Similar to the above, you may need to put a diagnostic message while the AM is being recovered by the RM.
          • After the AM is launched, the diagnostic could be something like "AM is launched", which is better than empty text.

          Regarding the implementation:

          • Since RMAppAttempt and SchedulerApplicationAttempt have a 1:1 relationship, we can save a reference to the RMAppAttempt in SchedulerApplicationAttempt, which avoids looking it up from RMContext.getRMApps().
          • Since String is immutable, amLaunchDiagnostics could be volatile, so we don't need to acquire locks (see the sketch after this list).
          • I suggest adding this to the REST API / web UI together with this patch if the changes are not complex.
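
          A minimal sketch of the volatile-field idea in the second point above; the field and method names are illustrative and not necessarily what the committed patch uses.

          // Illustrative fragment of a scheduler-side attempt object.
          public class SchedulerAttemptDiagnosticsSketch {
            // String is immutable, so a volatile reference is enough for safe publication:
            // readers never see a partially written message and no lock is needed.
            private volatile String amLaunchDiagnostics =
                "Added to scheduler, waiting to be scheduled";

            public void updateAMLaunchDiagnostics(String message) {
              this.amLaunchDiagnostics = message;
            }

            public String getAMLaunchDiagnostics() {
              return amLaunchDiagnostics;
            }
          }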
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 8s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 3m 21s trunk passed
          +1 compile 0m 26s trunk passed with JDK v1.8.0_66
          +1 compile 0m 26s trunk passed with JDK v1.7.0_79
          +1 checkstyle 0m 13s trunk passed
          +1 mvneclipse 0m 16s trunk passed
          +1 findbugs 1m 16s trunk passed
          +1 javadoc 0m 25s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 28s trunk passed with JDK v1.7.0_79
          +1 mvninstall 0m 29s the patch passed
          +1 compile 0m 24s the patch passed with JDK v1.8.0_66
          +1 javac 0m 24s the patch passed
          +1 compile 0m 27s the patch passed with JDK v1.7.0_79
          +1 javac 0m 27s the patch passed
          -1 checkstyle 0m 14s Patch generated 10 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 524, now 531).
          +1 mvneclipse 0m 16s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 25s the patch passed
          +1 javadoc 0m 25s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 35s the patch passed with JDK v1.7.0_79
          -1 unit 80m 57s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 85m 39s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_79.
          +1 asflicense 0m 38s Patch does not generate ASF License warnings.
          179m 36s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.reservation.TestCapacitySchedulerPlanFollower
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestReservations
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimits
          JDK v1.8.0_66 Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue
          JDK v1.7.0_79 Failed junit tests hadoop.yarn.server.resourcemanager.reservation.TestCapacitySchedulerPlanFollower
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestReservations
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimits
          JDK v1.7.0_79 Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue



          Subsystem Report/Notes
          Docker Client=1.7.0 Server=1.7.0 Image:test-patch-base-hadoop-date2015-11-04
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12770438/YARN-3946.v1.001.patch
          JIRA Issue YARN-3946
          Optional Tests asflicense javac javadoc mvninstall unit findbugs checkstyle compile
          uname Linux 08fe60c05f4d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/apache-yetus-1a9afee/precommit/personality/hadoop.sh
          git revision trunk / dac0463
          Default Java 1.7.0_79
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9626/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9626/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9626/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9626/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9626/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9626/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 226MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9626/console

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          I'd like to see this in application reports, so that client-side applications can display the details.

          Naganarasimha Naganarasimha G R added a comment -

          Thanks Steve Loughran. I am reworking the patch based on Tan, Wangda's comments, will consider this, and will upload a new patch.

          Naganarasimha Naganarasimha G R added a comment -

          Thanks for the quick feedback, Tan, Wangda.

          AM launch diagnostics should have an initial value after the app is added to the scheduler: ...

          I initially thought of adding this message, but the problem is that LeafQueue.activateApplications is called immediately by the CS in addApplicationAttempt, so the message would be replaced very quickly; hence the initial message would not be helpful, but I have ensured the related details are captured. Thoughts?

          Not caused by your patch: isWaitingForAMContainer checks whether the master container has been created; you may also need to check whether the application is in the recovering state, because the AM could contact the RM before the AM container is recovered by the RM.

          I am not sure I got this correctly:

          1. "The AM could contact the RM before the AM container is recovered by the RM": I failed to understand the impact of this. All the required information is restored from the RM state store (RMAppAttemptImpl.recover(RMState) sets the master container from the store), so after the services are started the AM heartbeat could arrive earlier than the NM heartbeat, but what impact could that have? Correct me if my understanding is wrong!
          2. "Check whether the application is in the recovering state or not": I am not sure how to do this if required. I went through RMAppAttemptImpl and RMAppImpl, and there were no methods or internal state exposing this. Maybe I am missing something here.

          I suggest adding this to the REST API / web UI together with this patch if the changes are not complex.

          Even the earlier implementation captured it as part of attempt.getDiagnostics, so it will be available in all the interfaces.

          The other comments have been handled. I have attached the web images.

          Steve Loughran,

          I'd like to see this in application reports, so that client-side applications can display the details

          This has been taken care of in this patch.

          leftnoteasy Wangda Tan added a comment -

          1) Is it possible to merge amLaunchDiagnostics and the other diagnostics? That could simplify the RMAppAttemptImpl implementation.
          2) Could you take a look at my previous comment?

          Since RMAppAttempt and SchedulerApplicationAttempt have a 1:1 relationship, we can save a reference to the RMAppAttempt in SchedulerApplicationAttempt, which avoids looking it up from RMContext.getRMApps().

          3) I feel this may not be needed (no code change needed for your latest patch):

          Since String is immutable, amLaunchDiagnostics could be volatile, so we don't need to acquire locks.

          Since createApplicationAttemptReport currently takes a big read lock, we don't need to spend extra effort on the volatile.

          4) Suggestions about the diagnostic message:

          • Have an internal field to record when the latest update for the app occurred. We can print it with the diagnostic message, e.g. [23 sec ago] <message>.
          • We can also use the above field to prevent excessive updating of the diagnostic message; currently it is updated on every heartbeat for every accessed application. I think we should limit the update frequency to avoid overhead; hardcoding it to 1 second seems fine for now, and we can make it configurable if people start complaining about it.
          • Generally, I think the message format could be (a sketch follows this list):
            Last update from scheduler: <time> (such as 23 sec ago); <message> (such as "Application is activated, waiting for allocating AM container"); Details: (instead of GenericInfo) Partition=x, the queue's absolute capacity, ... (and the other fields in your patch)
          • Even after the AM container is allocated and running, the above message is still useful, because people can tell whether the application is actively allocating resources or sitting in the queue waiting to be accessed.
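
          A small sketch of the rate-limited update and message format suggested above; the one-second interval and the "Last update from scheduler" wording come from the comment, while the class and method names are illustrative assumptions.

          // Illustrative sketch of a rate-limited AM launch diagnostic.
          public class RateLimitedDiagnostic {
            private static final long MIN_UPDATE_INTERVAL_MS = 1000L; // hardcoded 1 sec for now

            private volatile String message = "Added to scheduler, waiting to be scheduled";
            private volatile long lastUpdateTimeMs = System.currentTimeMillis();

            public void update(String newMessage) {
              long now = System.currentTimeMillis();
              if (now - lastUpdateTimeMs < MIN_UPDATE_INTERVAL_MS) {
                return; // skip: updated less than a second ago
              }
              this.message = newMessage;
              this.lastUpdateTimeMs = now;
            }

            /** Renders e.g. "Last update from scheduler: 23 sec ago; <message>; Details: ...". */
            public String render(String details) {
              long ageSec = (System.currentTimeMillis() - lastUpdateTimeMs) / 1000;
              return "Last update from scheduler: " + ageSec + " sec ago; "
                  + message + "; Details: " + details;
            }
          }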
          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda,
          Sorry for the delay. As per the offline discussion, we concluded:

          1. We should only record AM-launch-related events with this patch, so we don't need to record the recover/running state. (I think you can clear the AM launch diagnostic once the AM container is allocated.)
          2. The event time is good, but I think we should put it in a separate JIRA. Maybe we need to do some refactoring of the existing diagnostics part.

          I have taken care of the first point and keep AM launch diagnostic messages until the container is assigned to the AM process. As for the second point, since it was a simple modification, I have captured it in this JIRA itself. Please check it.
          Another difference from the previous patch: as I mentioned earlier, in some cases the reason why the node was not assigned was getting overwritten by the following modification in LeafQueue.

          @@ -904,7 +919,9 @@ public synchronized CSAssignment assignContainers(Resource clusterResource,
           
                   // Done
                   return assignment;
          -      } else if (!assignment.getSkipped()) {
          +      } else if (assignment.getSkipped()) {
          +        application.updateNodeDiagnostics(node);
          +      } else {
          

          Hence, I have handled it in this patch by storing this diagnostic message temporarily and clearing it once the message is created (see the sketch below).
          I have also attached some images related to the patch.
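
          A rough sketch of the "store temporarily, clear once the message is created" handling described above; the class and method names are illustrative, not necessarily those in the patch.

          // Illustrative: stash the per-node skip reason and fold it into the AM launch
          // diagnostic the next time that message is built, then clear it.
          public class NodeDiagnosticsBuffer {
            private volatile String lastNodeDiagnostics;

            /** Called when the scheduler skips this app on a node. */
            public void updateNodeDiagnostics(String nodeId, String reason) {
              this.lastNodeDiagnostics = "Node " + nodeId + " skipped: " + reason;
            }

            /** Consumes the stashed reason while building the AM launch diagnostic. */
            public String buildAMLaunchDiagnostic(String baseMessage) {
              String node = lastNodeDiagnostics;
              lastNodeDiagnostics = null; // clear once the message is created
              return node == null ? baseMessage : baseMessage + "; " + node;
            }
          }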

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          +1 mvninstall 8m 59s trunk passed
          +1 compile 0m 35s trunk passed with JDK v1.8.0_66
          +1 compile 0m 37s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 16s trunk passed
          +1 mvnsite 0m 44s trunk passed
          +1 mvneclipse 0m 17s trunk passed
          +1 findbugs 1m 25s trunk passed
          +1 javadoc 0m 28s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 31s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 40s the patch passed
          +1 compile 0m 34s the patch passed with JDK v1.8.0_66
          +1 javac 0m 34s the patch passed
          +1 compile 0m 36s the patch passed with JDK v1.7.0_85
          +1 javac 0m 36s the patch passed
          -1 checkstyle 0m 14s Patch generated 15 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 655, now 666).
          +1 mvnsite 0m 42s the patch passed
          +1 mvneclipse 0m 16s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 33s the patch passed
          +1 javadoc 0m 28s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 30s the patch passed with JDK v1.7.0_85
          -1 unit 80m 55s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 81m 51s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_85.
          +1 asflicense 0m 25s Patch does not generate ASF License warnings.
          183m 52s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimitsByPartition
          JDK v1.8.0_66 Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue
          JDK v1.7.0_85 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimitsByPartition
          JDK v1.7.0_85 Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12773821/YARN-3946.v1.003.patch
          JIRA Issue YARN-3946
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 2312a2eb1749 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 201f14e
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9765/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9765/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9765/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9765/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9765/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9765/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 75MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9765/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda,
          The TestAMAuthorization and TestClientRMTokens test cases are not related to this issue, and there are already JIRAs addressing those test failures. TestApplicationLimitsByPartition, however, is related to the patch; I have corrected it and also covered one case where, when the application is not assigned on a node, the diagnostics show the node information and the reason.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 1s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 5 new or modified test files.
          +1 mvninstall 8m 50s trunk passed
          +1 compile 0m 36s trunk passed with JDK v1.8.0_66
          +1 compile 0m 35s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 14s trunk passed
          +1 mvnsite 0m 42s trunk passed
          +1 mvneclipse 0m 17s trunk passed
          +1 findbugs 1m 23s trunk passed
          +1 javadoc 0m 27s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 30s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 38s the patch passed
          +1 compile 0m 33s the patch passed with JDK v1.8.0_66
          +1 javac 0m 33s the patch passed
          +1 compile 0m 35s the patch passed with JDK v1.7.0_85
          +1 javac 0m 35s the patch passed
          -1 checkstyle 0m 15s Patch generated 15 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 653, now 664).
          +1 mvnsite 0m 41s the patch passed
          +1 mvneclipse 0m 17s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 32s the patch passed
          +1 javadoc 0m 27s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 30s the patch passed with JDK v1.7.0_85
          -1 unit 80m 25s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 82m 37s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_85.
          +1 asflicense 0m 26s Patch does not generate ASF License warnings.
          183m 52s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.8.0_66 Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue
          JDK v1.7.0_85 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler
          JDK v1.7.0_85 Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12774044/YARN-3946.v1.004.patch
          JIRA Issue YARN-3946
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 19830e66528b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 28dfe72
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9780/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9780/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9780/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9780/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9780/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9780/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9780/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda,
          The test-case failures are not related to this JIRA; the tests pass locally with the patch modifications, and JIRAs have already been raised for some of these failures.

          leftnoteasy Wangda Tan added a comment -

          Naganarasimha G R, thanks for the update; some comments:

          1) RMAppImpl:
          When the app goes to a final state (FINISHED/KILLED, etc.), should we simply set AMLaunchDiagnostics to null?

          2) SchedulerApplicationAttempt:
          Why do we need two separate methods, updateDiagnosticsIfNotRunning/updateDiagnostics? They're a little confusing to me; I think the AM launch diagnostics should be updated only if the AM container is not running. If that makes sense to you, I suggest renaming/merging them into updateAMContainerDiagnostics.

          3) Do you think it would be better to rename AMState.PENDING to "inactivated"? I think "PENDING" could mean "activated-but-not-activated" to end users (assuming users don't have enough background knowledge about the scheduler).

          4) Instead of setting AMLaunchDiagnostics to null when RMAppAttempt enters the SCHEDULED state, do you think it would be better to do that in the RUNNING and FINAL_SAVING states? An unmanaged AM could skip the SCHEDULED state.

          5) It would also be very useful if you could update the AM launch diagnostics when RMAppAttempt goes to the LAUNCHED state; sometimes the AM container is allocated and sent to the NM but not successfully launched/registered with the RM. Currently we don't know when this happens, because YarnApplicationState doesn't have a "launched" state.

          Jian He, could you take a look at this patch as well?
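For illustration, a minimal sketch of the merged-helper idea from point 2 above; the class, field, and method names here are hypothetical and not taken from the actual patch:

```java
// Hypothetical sketch only -- not the actual patch code. It shows the idea of a
// single helper that updates the AM launch diagnostics only while the AM
// container is not yet running, instead of two separate update methods.
public class AMDiagnosticsSketch {
  private volatile boolean amContainerRunning;   // hypothetical flag
  private volatile String amLaunchDiagnostics;   // hypothetical field

  /** Record why the AM container has not started, unless it is already running. */
  public void updateAMContainerDiagnostics(String message) {
    if (!amContainerRunning) {
      amLaunchDiagnostics = message;
    }
  }

  /** Once the AM container is running, clear the launch diagnostics. */
  public void markAMContainerRunning() {
    amContainerRunning = true;
    amLaunchDiagnostics = null;
  }

  public String getAMLaunchDiagnostics() {
    return amLaunchDiagnostics;
  }
}
```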

          Naganarasimha Naganarasimha G R added a comment -

          Thanks for the comments Tan, Wangda,

When the app goes to a final state (FINISHED/KILLED, etc.), should we simply set AMLaunchDiagnostics to null?

IIUC you are referring to RMAppAttemptImpl, right? If so, it was a mistake: while correcting based on your previous comment I missed reverting this part. In any case, as per your 4th comment, for unmanaged AMs I have updated it to null here.

Why do we need two separate methods, updateDiagnosticsIfNotRunning/updateDiagnostics?

Maybe the names need to be improved, but two methods are required because the status should be updated only if the AM is not running. For example, it is called from FiCaSchedulerApp.allocate, which is invoked whenever a container is assigned to an app, but we want to update the diagnostic only when the AM is not yet launched; it is used similarly in LeafQueue.assignContainers. In other cases we are sure that the AM is not yet launched, so to avoid the unnecessary check (whether the AM is running) we have updateDiagnostics. Maybe I can name them checkAndUpdateAMContainerDiagnostics and updateAMContainerDiagnostics?

Do you think it is better to rename AMState.PENDING to inactivated?

Yes, PENDING is not understandable to everyone, hence the diagnostic message for PENDING is already set to "Application is added to the scheduler and is not yet activated." Maybe I can phrase it as "Application is added to the scheduler but is not yet scheduled." Thoughts?

Instead of setting AMLaunchDiagnostics to null when RMAppAttempt enters the SCHEDULED state, do you think it is better to do that in the RUNNING and FINAL_SAVING states? An unmanaged AM could skip the SCHEDULED state.

IMO I would prefer to reset it only for unmanaged AMs in the FINAL_SAVING state, since we already show the YarnApplicationState as RUNNING along with a description of it; if the diagnostics also said that the AM is launched and running, it would become repetitive in the UI for normal (non-unmanaged-AM) apps.

It will also be very useful if you can update the AM launch diagnostics when RMAppAttempt goes to the LAUNCHED state,

Actually I had wrongly used AMContainerAllocatedTransition to reset the diagnostic message; my intention was to reset it only after the AM is launched and registered. This would be very useful for analyzing the state of the AM. I have introduced LAUNCHED and set the message after AMLauncher sends the LAUNCHED event to RMAppAttempt.

          Tan, Wangda & Jian He
Please review the latest patch.
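As an illustration of the per-state diagnostic messages being discussed here, a hypothetical sketch follows; only the quoted PENDING wording above comes from this thread, while the other constants and message texts are made up for illustration and are not the committed enum:

```java
// Hypothetical sketch of per-state AM launch diagnostics -- not the committed code.
// Only the PENDING message text is quoted from the discussion above; the other
// constants and messages are illustrative.
public enum AMLaunchStateSketch {
  PENDING("Application is added to the scheduler and is not yet activated."),
  ACTIVATED("Application is activated, waiting for AM container to be allocated."),
  ASSIGNED("AM container is allocated, waiting for it to be launched."),
  LAUNCHED("AM container is launched, waiting for it to register with the RM.");

  private final String diagnosticMessage;

  AMLaunchStateSketch(String diagnosticMessage) {
    this.diagnosticMessage = diagnosticMessage;
  }

  public String getDiagnosticMessage() {
    return diagnosticMessage;
  }
}
```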

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 5 new or modified test files.
          +1 mvninstall 9m 46s trunk passed
          +1 compile 0m 41s trunk passed with JDK v1.8.0_66
          +1 compile 0m 40s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 16s trunk passed
          +1 mvnsite 0m 48s trunk passed
          +1 mvneclipse 0m 19s trunk passed
          +1 findbugs 1m 33s trunk passed
          +1 javadoc 0m 32s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 34s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 44s the patch passed
          +1 compile 0m 41s the patch passed with JDK v1.8.0_66
          +1 javac 0m 41s the patch passed
          +1 compile 0m 40s the patch passed with JDK v1.7.0_91
          +1 javac 0m 40s the patch passed
          -1 checkstyle 0m 16s Patch generated 21 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 655, now 672).
          +1 mvnsite 0m 46s the patch passed
          +1 mvneclipse 0m 18s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 44s the patch passed
          +1 javadoc 0m 33s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 34s the patch passed with JDK v1.7.0_91
          -1 unit 86m 2s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 88m 26s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 46s Patch does not generate ASF License warnings.
          216m 19s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.8.0_66 Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_91 Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12775171/YARN-3946.v1.005.patch
          JIRA Issue YARN-3946
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux a49ca11576c0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 3c4a34e
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9832/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9832/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9832/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9832/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9832/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9832/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9832/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda,
Some of the test failures seem to be related to the patch. I will also merge checkAndUpdateAMContainerDiagnostics and updateAMContainerDiagnostics, using an additional parameter rather than a separate method. Will upload a new patch at the earliest.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda,
Attaching a patch with corrections for the test cases and with the duplicate method in SchedulerApplicationAttempt removed.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 6 new or modified test files.
          +1 mvninstall 10m 8s trunk passed
          +1 compile 0m 43s trunk passed with JDK v1.8.0_66
          +1 compile 0m 40s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 16s trunk passed
          +1 mvnsite 0m 48s trunk passed
          +1 mvneclipse 0m 19s trunk passed
          +1 findbugs 1m 33s trunk passed
          +1 javadoc 0m 33s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 36s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 46s the patch passed
          +1 compile 0m 42s the patch passed with JDK v1.8.0_66
          +1 javac 0m 42s the patch passed
          +1 compile 0m 40s the patch passed with JDK v1.7.0_85
          +1 javac 0m 40s the patch passed
          -1 checkstyle 0m 16s Patch generated 21 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 653, now 670).
          +1 mvnsite 0m 48s the patch passed
          +1 mvneclipse 0m 18s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 42s the patch passed
          +1 javadoc 0m 33s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 35s the patch passed with JDK v1.7.0_85
          -1 unit 65m 59s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 65m 22s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_85.
          +1 asflicense 0m 31s Patch does not generate ASF License warnings.
          155m 20s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_85 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12775817/YARN-3946.v1.006.patch
          JIRA Issue YARN-3946
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux b3298f9be10e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / cbc7b6b
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9865/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9865/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9865/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9865/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9865/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9865/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9865/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

Hi Tan, Wangda, the test case failures are not related to the patch, and the valid checkstyle issues have already been handled.

          leftnoteasy Wangda Tan added a comment -

          Naganarasimha G R, thanks for updating the patch. Some minor comments:
          1) Not sure if this is needed:
          AMRegisteredTransition:

                // reset AMLaunchDiagnostics once AM Registers with RM but in case of
                // Unmanaged AM we keep the diagnostic message till the attempt is
                // finished
                if (! appAttempt.submissionContext.getUnmanagedAM()) {
                  appAttempt.updateAMLaunchDiagnostics(null);
                }
          

My feeling is that we should reset AMLaunchDiagnostics no matter whether the AM is managed or not. Thoughts?

          2) RMAppImpl:
          getCurrentAppAttempt().getDiagnostics() is called twice.

          3) FiCaSchedulerApp:
Suggest renaming

          public void updateNodeInfoForAMDiagnostics(String message)

          To "updateAppSkipNodeDiagnostics", IIUC, it will be called when app skips allocations.

4) It's better to update the patch to avoid hard-coded messages (especially when you need to verify them in tests). Does it make sense to create an AMContainerLaunchDiagnostics at yarn.scheduler (for general launch diagnostics, if you have any) and a CSAMContainerLaunchDiagnostics at yarn.scheduler.capacity? LeafQueue.USER_S_AM_RESOURCE_LIMIT_EXCEED can be removed as well.
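The commit that eventually landed does add a CSAMContainerLaunchDiagnosticsConstants class (see the file list further below); a hypothetical sketch of what such a constants holder could look like is shown next, where the constant names and message texts are illustrative only:

```java
// Hypothetical sketch of a diagnostics-constants holder as suggested in comment 4.
// The file that was actually committed is CSAMContainerLaunchDiagnosticsConstants;
// the constant names and message texts below are illustrative, not the committed ones.
public interface CSAMContainerLaunchDiagnosticsSketch {
  String QUEUE_AM_RESOURCE_LIMIT_EXCEEDED =
      "Queue's AM resource limit exceeded.";
  String USER_AM_RESOURCE_LIMIT_EXCEEDED =
      "User's AM resource limit exceeded.";
  String SKIPPED_NODE_PREFIX =
      "Last node which was processed for the application: ";
}
```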

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda,
I have incorporated the changes you suggested. Please take a look.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 6 new or modified test files.
          +1 mvninstall 8m 4s trunk passed
          +1 compile 0m 31s trunk passed with JDK v1.8.0_66
          +1 compile 0m 33s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 14s trunk passed
          +1 mvnsite 0m 39s trunk passed
          +1 mvneclipse 0m 16s trunk passed
          +1 findbugs 1m 15s trunk passed
          -1 javadoc 0m 25s hadoop-yarn-server-resourcemanager in trunk failed with JDK v1.8.0_66.
          +1 javadoc 0m 29s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 36s the patch passed
          +1 compile 0m 30s the patch passed with JDK v1.8.0_66
          +1 javac 0m 30s the patch passed
          +1 compile 0m 31s the patch passed with JDK v1.7.0_91
          +1 javac 0m 31s the patch passed
          -1 checkstyle 0m 14s Patch generated 28 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 654, now 678).
          +1 mvnsite 0m 40s the patch passed
          +1 mvneclipse 0m 16s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 28s the patch passed
          -1 javadoc 0m 24s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          +1 javadoc 0m 30s the patch passed with JDK v1.7.0_91
          -1 unit 67m 55s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 68m 28s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 24s Patch does not generate ASF License warnings.
          155m 36s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12776385/YARN-3946.v1.007.patch
          JIRA Issue YARN-3946
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 4448588a9785 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 832b3cb
          findbugs v3.0.0
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/9904/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9904/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/9904/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9904/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9904/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9904/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9904/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9904/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9904/console

          This message was automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Naganarasimha G R,

Thanks for the update. I tried the patch locally and the latest patch looks good; could you check whether the javadoc and test failures are related?

          Naganarasimha Naganarasimha G R added a comment -

Thanks for pointing it out, Tan, Wangda; I have corrected the test case and the applicable/valid checkstyle issues.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 6 new or modified test files.
          +1 mvninstall 8m 29s trunk passed
          +1 compile 0m 36s trunk passed with JDK v1.8.0_66
          +1 compile 0m 33s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 14s trunk passed
          +1 mvnsite 0m 39s trunk passed
          +1 mvneclipse 0m 16s trunk passed
          +1 findbugs 1m 22s trunk passed
          -1 javadoc 0m 28s hadoop-yarn-server-resourcemanager in trunk failed with JDK v1.8.0_66.
          +1 javadoc 0m 29s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 36s the patch passed
          +1 compile 0m 34s the patch passed with JDK v1.8.0_66
          +1 javac 0m 34s the patch passed
          +1 compile 0m 35s the patch passed with JDK v1.7.0_91
          +1 javac 0m 35s the patch passed
          -1 checkstyle 0m 15s Patch generated 18 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 654, now 668).
          +1 mvnsite 0m 43s the patch passed
          +1 mvneclipse 0m 16s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 29s the patch passed
          -1 javadoc 0m 25s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          +1 javadoc 0m 0s the patch passed
          -1 unit 70m 21s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 71m 6s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 23s Patch does not generate ASF License warnings.
          161m 0s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12776868/YARN-3946.v1.008.patch
          JIRA Issue YARN-3946
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 632d164907e4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 21daa6c
          findbugs v3.0.0
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/9931/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9931/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/9931/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9931/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9931/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9931/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9931/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9931/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus 0.1.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9931/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

          Hi Tan, Wangda,
The javadoc, checkstyle and unit test case issues are either not related to the patch or not valid to address.

          leftnoteasy Wangda Tan added a comment -

Looks good, +1. Thanks Naganarasimha G R; I will wait a few days to see if anybody else wants to take a look at the patch.

          leftnoteasy Wangda Tan added a comment -

Committed to trunk/branch-2.

Thanks Naganarasimha G R, and thanks for the reviews from Varun Saxena/Sumit Nigam/Rohith Sharma K S/Steve Loughran!
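With this change in, the reason an app is stuck in ACCEPTED appears in the application's diagnostics, alongside the web UI and REST views. A minimal client-side usage sketch follows, assuming a correctly configured YARN client classpath and an existing application ID; it uses the standard YarnClient/ApplicationReport API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class PrintAmLaunchDiagnostics {
  // Prints the current state and diagnostics of an application. For an app stuck
  // in ACCEPTED, the diagnostics should now carry the scheduler's reason, e.g.
  // an AM resource limit being reached.
  public static void printDiagnostics(ApplicationId appId) throws Exception {
    YarnClient client = YarnClient.createYarnClient();
    client.init(new Configuration());
    client.start();
    try {
      ApplicationReport report = client.getApplicationReport(appId);
      System.out.println(report.getYarnApplicationState() + ": "
          + report.getDiagnostics());
    } finally {
      client.stop();
    }
  }
}
```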

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8962 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8962/)
          YARN-3946. Update exact reason as to why a submitted app is in ACCEPTED (wangda: rev 6cb0af3c39a5d49cb2f7911ee21363a9542ca2d7)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSAMContainerLaunchDiagnosticsConstants.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestNodeLabelContainerAllocation.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestCapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimitsByPartition.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
          Naganarasimha Naganarasimha G R added a comment -

          Thanks for the review and commit Tan, Wangda.

          hudson Hudson added a comment -

          ABORTED: Integrated in Hadoop-Hdfs-trunk-Java8 #692 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/692/)
          YARN-3946. Update exact reason as to why a submitted app is in ACCEPTED (wangda: rev 6cb0af3c39a5d49cb2f7911ee21363a9542ca2d7)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSAMContainerLaunchDiagnosticsConstants.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestNodeLabelContainerAllocation.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestCapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimitsByPartition.java
          leftnoteasy Wangda Tan added a comment -

          Committed to branch-2.8.

          ajisakaa Akira Ajisaka added a comment -

          Hi Naganarasimha G R and Wangda Tan, TestNetworkedJob has been failing since this change went in. Would you please review the patch in MAPREDUCE-6579?

          Naganarasimha Naganarasimha G R added a comment -

          I will take a look and update. Thanks for informing!

          ajisakaa Akira Ajisaka added a comment -

          Hi Wangda Tan, TestNetworkedJob is still failing. Would you please review the patch in MAPREDUCE-6579?

          sunilg Sunil G added a comment -

          Hi Naganarasimha Garla

          Could you please provide a patch for branch-2.7 as well? It seems useful there. cc Wangda Tan


            People

            • Assignee:
              Naganarasimha Naganarasimha G R
            • Reporter:
              sumit.nigam Sumit Nigam
            • Votes:
              0
            • Watchers:
              16
