  1. Hadoop YARN
  2. YARN-1197 Support changing resources of an allocated container
  3. YARN-1509

Make AMRMClient support sending container resource increase requests and receiving increased/decreased containers

    Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: resourcemanager
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      As described in YARN-1197, we need to add APIs in AMRMClient to support:
      1) Adding container resource increase requests
      2) Getting successfully increased/decreased containers back from the RM

      1. YARN-1509.1.patch
        36 kB
        MENG DING
      2. YARN-1509.10.patch
        55 kB
        MENG DING
      3. YARN-1509.2.patch
        36 kB
        MENG DING
      4. YARN-1509.3.patch
        36 kB
        MENG DING
      5. YARN-1509.4.patch
        36 kB
        MENG DING
      6. YARN-1509.5.patch
        36 kB
        MENG DING
      7. YARN-1509.6.patch
        54 kB
        MENG DING
      8. YARN-1509.7.patch
        56 kB
        MENG DING
      9. YARN-1509.8.patch
        55 kB
        MENG DING
      10. YARN-1509.9.patch
        54 kB
        MENG DING


          Activity

          mding MENG DING added a comment -

          Submitting the first patch for review.

          Wangda Tan, recall that during the design stage we discussed the requirement for an AMRMClient API to get the latest approved increase request. I think the reasoning at the time was that we wanted to retrieve the latest approved increase request and use it to poll the NM to see whether the increase had completed. But since we have changed the increase action on the NM to be blocking, I can't think of any real use case for this API anymore. What do you think?

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 18m 27s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 2 new or modified test files.
          +1 javac 7m 55s There were no new javac warning messages.
          +1 javadoc 10m 16s There were no new javadoc warning messages.
          +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 1m 8s The applied patch generated 5 new checkstyle issues (total was 79, now 84).
          +1 whitespace 0m 10s The patch has no lines that end in whitespace.
          +1 install 1m 28s mvn install still works.
          +1 eclipse:eclipse 0m 32s The patch built with eclipse:eclipse.
          +1 findbugs 3m 3s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          -1 yarn tests 1m 17s Tests failed in hadoop-yarn-applications-distributedshell.
          +1 yarn tests 7m 28s Tests passed in hadoop-yarn-client.
          -1 yarn tests 56m 16s Tests failed in hadoop-yarn-server-resourcemanager.
              108m 56s  



          Reason Tests
          Failed unit tests hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
            hadoop.yarn.applications.distributedshell.TestDistributedShell
            hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12764082/YARN-1509.2.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 50741cb
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9290/artifact/patchprocess/diffcheckstylehadoop-yarn-client.txt
          hadoop-yarn-applications-distributedshell test log https://builds.apache.org/job/PreCommit-YARN-Build/9290/artifact/patchprocess/testrun_hadoop-yarn-applications-distributedshell.txt
          hadoop-yarn-client test log https://builds.apache.org/job/PreCommit-YARN-Build/9290/artifact/patchprocess/testrun_hadoop-yarn-client.txt
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/9290/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9290/testReport/
          Java 1.7.0_55
          uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9290/console

          This message was automatically generated.

          leftnoteasy Wangda Tan added a comment -

          Thanks MENG DING,

          I think the patch generally looks good; one query:

                    // increase/decrease requests could have been added during the
                    // allocate call. Those are the newest requests which take precedence
                    // over requests cached in the increaseList and decreaseList.
                    //
                    // Only insert entries from the cached increaseList and decreaseList
                    // that do not exist in either of current decrease and increase maps:
                    // 1. If the cached increaseList contains the same container as that
                    //    in the new increase map, then there is nothing to do, as
                    //    the request in the new increase map has the latest value.
                    // 2. If the cached increaseList contains the same container as that
                    //    in the new decrease map, then there is nothing to do either as
                    //    the request in the new decrease map is newer and should cancel
                    //    the old increase request.
                    // 3. The above also applies to the decreaseList.
                    for (ContainerResourceChangeRequest oldIncrease : increaseList) {
                      ContainerId oldContainerId = oldIncrease.getContainerId();
                      if (increase.get(oldContainerId) == null
                          && decrease.get(oldContainerId) == null) {
                        increase.put(oldContainerId, oldIncrease.getCapability());
                      }
                    }
                    for (ContainerResourceChangeRequest oldDecrease : decreaseList) {
                      ContainerId oldContainerId = oldDecrease.getContainerId();
                      if (decrease.get(oldContainerId) == null
                          && increase.get(oldContainerId) == null) {
                        decrease.put(oldContainerId, oldDecrease.getCapability());
                      }
                    }
          

          I think we can simply add decreaseList to decrease and increaseList to increase. If AllocateResponse == null, we assume the allocation failed and the scheduler's increase/decrease table wasn't updated. In this case, I think we should simply revert the changes to the increase/decrease table. Thoughts?

          And I think we can add some debug/info messages as well. For example, in removePendingChangeRequests, if a request matches, we can log that.

          mding MENG DING added a comment -

          Thanks for the review Wangda Tan!

          I think we can simply add decreaseList to decrease and increaseList to increase.

          In most cases, the current logic effectively adds decreaseList to the decrease map and increaseList to the increase map. But since the allocate call (allocateResponse = allocate(progressIndicator)) is not synchronized, new increase/decrease requests may have been added to the increase/decrease table during the allocation, and IMO those should not be overwritten by the old requests cached in increaseList and decreaseList. This is similar to the logic for new container requests when allocation fails. Let me know if you think otherwise.

          bq. if request matches, we can print some logs to show this

          Will do.

          mding MENG DING added a comment -

          Attaching latest patch:

          • Added AbstractCallbackHandler in AMRMClientAsync
          • Added more debug logs

          Wangda Tan, do you have any question/concern regarding my previous response?

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 16m 52s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 2 new or modified test files.
          +1 javac 8m 5s There were no new javac warning messages.
          +1 javadoc 10m 29s There were no new javadoc warning messages.
          -1 release audit 0m 17s The applied patch generated 1 release audit warnings.
          -1 checkstyle 0m 36s The applied patch generated 5 new checkstyle issues (total was 79, now 78).
          -1 whitespace 0m 5s The patch has 4 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 install 1m 29s mvn install still works.
          +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
          +1 findbugs 0m 56s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          +1 yarn tests 7m 26s Tests passed in hadoop-yarn-client.
              46m 52s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12764874/YARN-1509.3.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / fdf02d1
          Release Audit https://builds.apache.org/job/PreCommit-YARN-Build/9338/artifact/patchprocess/patchReleaseAuditProblems.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9338/artifact/patchprocess/diffcheckstylehadoop-yarn-client.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/9338/artifact/patchprocess/whitespace.txt
          hadoop-yarn-client test log https://builds.apache.org/job/PreCommit-YARN-Build/9338/artifact/patchprocess/testrun_hadoop-yarn-client.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9338/testReport/
          Java 1.7.0_55
          uname Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9338/console

          This message was automatically generated.

          mding MENG DING added a comment -

          Submitting a new patch that fixes the whitespace issue.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 16m 17s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 2 new or modified test files.
          +1 javac 7m 59s There were no new javac warning messages.
          +1 javadoc 10m 18s There were no new javadoc warning messages.
          -1 release audit 0m 15s The applied patch generated 1 release audit warnings.
          -1 checkstyle 0m 30s The applied patch generated 5 new checkstyle issues (total was 79, now 78).
          +1 whitespace 0m 8s The patch has no lines that end in whitespace.
          +1 install 1m 30s mvn install still works.
          +1 eclipse:eclipse 0m 36s The patch built with eclipse:eclipse.
          +1 findbugs 0m 53s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          +1 yarn tests 7m 31s Tests passed in hadoop-yarn-client.
              46m 1s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12765011/YARN-1509.4.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / b925cf1
          Release Audit https://builds.apache.org/job/PreCommit-YARN-Build/9346/artifact/patchprocess/patchReleaseAuditProblems.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9346/artifact/patchprocess/diffcheckstylehadoop-yarn-client.txt
          hadoop-yarn-client test log https://builds.apache.org/job/PreCommit-YARN-Build/9346/artifact/patchprocess/testrun_hadoop-yarn-client.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9346/testReport/
          Java 1.7.0_55
          uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9346/console

          This message was automatically generated.

          mding MENG DING added a comment -
          • The release audit warning is not related to this patch.
          • Will apply for a checkstyle exception:
            • The relaxed visibility is for testing purposes.
            • The function length exceeding the limit is caused by long comments.
          leftnoteasy Wangda Tan added a comment -

          Thanks MENG DING, I think the patch looks good, and your response makes sense to me. One nit: could you wrap Log.debug calls with Log.isDebugEnabled? Will commit once Jenkins gets back.
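
          For reference, a minimal sketch of the guard pattern being requested, using commons-logging as Hadoop 2.x does (the class name and log message are illustrative, not taken from the patch):

            import org.apache.commons.logging.Log;
            import org.apache.commons.logging.LogFactory;

            public class PendingChangeLogger {
              private static final Log LOG = LogFactory.getLog(PendingChangeLogger.class);

              void logMatchedRequest(String containerId) {
                // Guard the call so the message string is only built when
                // debug logging is actually enabled.
                if (LOG.isDebugEnabled()) {
                  LOG.debug("Matched pending change request for container " + containerId);
                }
              }
            }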

          mding MENG DING added a comment -

          Thanks Wangda Tan. Attaching the patch that addresses the comments.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 16m 52s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 2 new or modified test files.
          +1 javac 8m 3s There were no new javac warning messages.
          +1 javadoc 10m 28s There were no new javadoc warning messages.
          -1 release audit 0m 19s The applied patch generated 1 release audit warnings.
          -1 checkstyle 0m 28s The applied patch generated 5 new checkstyle issues (total was 79, now 78).
          +1 whitespace 0m 8s The patch has no lines that end in whitespace.
          +1 install 1m 32s mvn install still works.
          +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
          +1 findbugs 0m 52s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          -1 yarn tests 7m 23s Tests failed in hadoop-yarn-client.
              46m 45s  



          Reason Tests
          Failed unit tests hadoop.yarn.client.api.impl.TestYarnClient



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12765393/YARN-1509.5.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 61b3547
          Release Audit https://builds.apache.org/job/PreCommit-YARN-Build/9371/artifact/patchprocess/patchReleaseAuditProblems.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9371/artifact/patchprocess/diffcheckstylehadoop-yarn-client.txt
          hadoop-yarn-client test log https://builds.apache.org/job/PreCommit-YARN-Build/9371/artifact/patchprocess/testrun_hadoop-yarn-client.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9371/testReport/
          Java 1.7.0_55
          uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9371/console

          This message was automatically generated.

          mding MENG DING added a comment -

          Test failure is not related to this patch.
          Checkstyle warning is the same as before.

          bikassaha Bikas Saha added a comment -

          Sorry for coming in late on this. I have some questions on the API.

          Why are there separate methods for increase and decrease instead of a single method to change the container resource size? By comparing the existing resource allocation to a container and the new requested resource allocation, it should be clear whether an increase or decrease is being requested.

          Also, for completeness, is there a need for a cancelContainerResourceChange()? After a container resource change request has been submitted, what are my options as a user other than to wait for the request to be satisfied by the RM?

          If I release the container, then does it mean all pending change requests for that container should be removed? From a quick look at the patch, it does not look like that is being covered, unless I am missing something.

          What will happen if the AM restarts after submitting a change request? Does the AM-RM re-register protocol need an update to handle re-synchronizing the change requests? What happens if the RM restarts? If these are explained in a document, please point me to it. The patch did not seem to have anything around this area, so I thought I would ask.

          Also, why have the callback interface methods been made non-public? Would that be an incompatible change?

          mding MENG DING added a comment -

          Hi, Bikas Saha

          Thanks a lot for the valuable comments!

          Why are there separate methods for increase and decrease instead of a single method to change the container resource size? By comparing the existing resource allocation to a container and the new requested resource allocation, it should be clear whether an increase or decrease is being requested.

          As discussed in the design stage, and also described in the design doc, the reason for separating the increase/decrease requests in the APIs and the AMRM protocol is to make sure that users make a conscious decision when issuing these requests. It also makes it much easier to catch potential user mistakes. For example, if a user intends to increase the resources of a container but mistakenly specifies a target resource that is smaller than the current resource, the RM can catch that and throw an exception.

          Also, for completeness, is there a need for a cancelContainerResourceChange()? After a container resource change request has been submitted, what are my options as a user other than to wait for the request to be satisfied by the RM?

          For a container resource decrease request, there is practically no chance (and probably no need) to cancel, as the decrease takes effect immediately when the scheduler processes the request (similar to a release-container request). For a container resource increase, the user can cancel any pending increase request still sitting in the RM by sending a decrease request for the container's current size. I will improve the Javadoc to make this clear.

          If I release the container, then does it mean all pending change requests for that container should be removed? From a quick look at the patch, it does not look like that is being covered, unless I am missing something.

          You are right that releasing a container should cancel all pending change requests for that container. This is missing from the current implementation; I will add it.

          What will happen if the AM restarts after submitting a change request. Does the AM-RM re-register protocol need an update to handle the case of re-synchronizing on the change requests? Whats happens if the RM restarts? If these are explained in a document, then please point me to the document. The patch did not seem to have anything around this area. So I thought I would ask

          The current implementation handles RM restarts by maintaining pendingIncrease and pendingDecrease maps, just like the pendingRelease list. This is covered in the design doc.
          For AM restarts, I am not sure what we need to do here. Does the AM-RM re-register protocol currently handle re-synchronizing outstanding new container requests after the AM is restarted? Could you elaborate a little on this?

          Also, why have the callback interface methods been made non-public? Would that be an incompatible change?

          All interface methods are implicitly public and abstract, so the existing public modifiers on these methods are redundant; I removed them.

          mding MENG DING added a comment -

          Had an offline discussion with Wangda Tan and Bikas Saha. Overall we agreed that we can combine the separate increase/decrease requests into one API in the client:

          • Combine requestContainerResourceIncrease and requestContainerResourceDecrease into one API. For example:
              /**
               * Request container resource change before calling <code>allocate</code>.
               * Any previous pending resource change request of the same container will be
               * cancelled.
               *
               * @param container The container returned from the last successful resource
               *                  allocation or resource change
               * @param capability  The target resource capability of the container
               */
              public abstract void requestContainerResourceChange(
                  Container container, Resource capability);
            

            The user must pass in a Container object (instead of just a container ID) and the target resource capability. Because the Container object contains the existing container Resource, the AMRMClient can compare it against the target resource to figure out whether this is an increase or a decrease request.

          • There is NO need to change the AMRM protocol.
          • For the CallbackHandler methods, we can also combine onContainersResourceDecreased and onContainersResourceIncreased into one API:
                public abstract void onContainersResourceChanged(
                    List<Container> containers);
            

            The user can compare the passed-in containers with the containers they have remembered to determine if this is an increase or decrease request. Or maybe we can make it even simpler by doing something like the following? Thoughts?

                public abstract void onContainersResourceChanged(
                    List<Container> increasedContainers,  List<Container> decreasedContainers);
            
          • We can deprecate the existing CallbackHandler interface and use the AbstractCallbackHandler instead.

          Bikas Saha, Wangda Tan, any comments?

          bikassaha Bikas Saha added a comment -

          A change container request (maybe not supported now) can be increase cpu + decrease memory. Hence a built-in concept of increase and decrease in the API is something I am wary of.
          So how about

           public abstract void onContainersResourceChanged(
                  Map<Container,Container> oldToNewContainers); 
          OR 
          public abstract void onContainersResourceChanged(
                  List<UpdatedContainerInfo>  updatedContainerInfo);

          Would there be a case (maybe not currently) when a change container request can fail on the RM? Should the callback allow notifying about a failure to change the container?
          What if the RM notifies the AMRMClient about a container that has completed, and that container happens to have a pending change request? What should happen in this case? Should the AMRMClient clear that pending request? Should it also notify the user that the pending container change request has failed, or just rely on onContainerCompleted() to let the AM get that information?

          I would be wary of overloading cancel with a second container change request. To be clear, here we are discussing user-facing semantics and API. Having clear semantics is important versus implicit or overloaded behavior. E.g., are there cases where an increase followed by a decrease request is a valid scenario, and how would that differ from an increase followed by a cancel? Should the RM do different things for increase followed by cancel vs. increase followed by decrease?

          AM restart does not need any handling since the AM is going to start from a clean slate. Sorry, my bad.

          I missed the handling of the RM restart case. Is there an existing test for that code path that could be augmented to make sure that the new changes are tested?

          mding MENG DING added a comment -

          Hi, Bikas Saha

          Apologies for the late response; I was out traveling and just came back.

          A change container request (maybe not supported now) can be increase cpu + decrease memory. Hence a built in concept of increase and decrease in the API is something I am wary off

          From the design stage of this project, I believe the semantics of "changing container resource" were meant to be either "increase" or "decrease"; this was reinforced by the design choice that successful increases and decreases of resources go through different paths. I have some concerns about extending the semantics to something like "increase cpu + decrease memory" inside one change request:

          • A resource decrease happens immediately, while a resource increase involves handing out a token together with a user action to increase on the NM. If we extend the semantics, we need to educate the user that once a change request is approved, the decrease part of the request is effective immediately, while the increase part is still pending on user action. Could this be too confusing?
          • To make matters worse, if the increase token expires and the RM rolls back the allocation of the increase part of the request, we end up with a partially fulfilled request, as we are not able to rollback the decrease part of the request.

          IMHO, it is much cleaner to clearly separate increase and decrease requests at the user API level. If a user wants to increase cpu and decrease memory, he should send out two separate requests. Thoughts?

          So how about

          public abstract void onContainersResourceChanged(Map<Container,Container> oldToNewContainers); 
          OR
          public abstract void onContainersResourceChanged(List<UpdatedContainerInfo> updatedContainerInfo);

          I thought about providing the old containers in the callback method. Right now AMRMClientImpl remembers old containers in the pendingChange map, but the problem is, in the AMRMClientImpl.allocate call, once an increase/decrease approval is received, the old containers are immediately removed from the pending map. So by the time the AMRMClientAsyncImpl callback handler thread starts to process the response, the old containers won't be there any more:

          +        if (!pendingIncrease.isEmpty() && !allocateResponse.getIncreasedContainers().isEmpty()) {
          +          removePendingChangeRequests(allocateResponse.getDecreasedContainers(), true);
          +        }
          +        if (!pendingDecrease.isEmpty() && !allocateResponse.getDecreasedContainers().isEmpty()) {
          +          removePendingChangeRequests(allocateResponse.getDecreasedContainers(), false);
          +        }
          

          My thought is, since we already ask the user to provide the old container when he sends out the change request, he should have the old container already, so we don't necessarily have to provide the old container info in the callback method. Thoughts?

          Would there be a case (maybe not currently) when a change container request can fail on the RM? Should the callback allow notifying about a failure to change the container?

          AbstractCallbackHandler.onError will be called when the change container request throws an exception on the RM side.

          What is the RM notifies AMRMClient about a container completed. That container happens to have a pending change request? What should happen in this case? Should the AMRM client clear that pending request? Should it also notify the user that pending container change request has failed or just rely on onContainerCompleted() to let the AM get that information.

          I think in this case AMRMClient should clear all pending requests that belong to this container. I will add that logic in. Thanks!
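
          As a rough sketch of the cleanup described here (the pendingIncrease/pendingDecrease map names follow this discussion; the value type is left generic since it is not shown here), assuming the completed containers arrive as ContainerStatus objects:

            import java.util.List;
            import java.util.Map;

            import org.apache.hadoop.yarn.api.records.ContainerId;
            import org.apache.hadoop.yarn.api.records.ContainerStatus;

            final class PendingChangeCleanup {
              // When containers complete, drop any pending change requests
              // recorded for them so they are not re-sent to the RM.
              static void clearPendingChanges(List<ContainerStatus> completed,
                  Map<ContainerId, ?> pendingIncrease,
                  Map<ContainerId, ?> pendingDecrease) {
                for (ContainerStatus status : completed) {
                  ContainerId id = status.getContainerId();
                  pendingIncrease.remove(id);
                  pendingDecrease.remove(id);
                }
              }
            }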

          I would be wary of overloading cancel with a second container change request. To be clear, here we are discussing user facing semantics and API. Having clear semantics is important vs implicit or overloaded behavior.

          I am not against providing a separate cancel API. But I think the API needs to be clear that the cancel is only for increase requests, NOT decrease requests (just as we don't have something like a cancel-release-container API). For example, we could have something like the following. Thoughts?

            public abstract void cancelContainerResourceIncrease(Container container)
          

          Is there an existing test for that code path that could be augmented to make sure that the new changes are tested?

          I didn't find an existing test that covers the pending list on RM restart; I will try to add a test case for that. Thanks.

          mding MENG DING added a comment -

          I didn't find existing tests that test the pending list on RM restart, I will try to add a test case for that

          Correction: I found an existing test (testAMRMClientResendsRequestsOnRMRestart) that tests the pending list on RM restart. Will augment it to also cover the pendingChange map.

          bikassaha Bikas Saha added a comment -

          Increase/Decrease/Change

          Not sure why the implementation went down the route of separating increase vs. decrease throughout the flow. In any case, that is the back-end implementation, which can change and evolve without affecting user code. This is the user-facing API, so once code is written against it, making backwards-incompatible changes or providing new functionality in the future needs to be considered. API simplicity also needs to be considered.
          1) Having a changeRequest API allows us to support other kinds of changes (a mix of increase and decrease) at a later point. For now, the API could check for increase/decrease (which it has to do anyway for sanity checking) and reject unsupported scenarios.
          2) Even for just increase or decrease, given that the user provides the old Container + the new requested size, it should be easy for the library to figure out whether an increase or a decrease is needed (a sketch of this classification follows below). Why burden the user by having them call 2 different APIs that are essentially making the same request?
          Having to handle a container token for an increase but do nothing for a decrease is something that already has to be explained to the user, as is the fact that a decrease is quick but an increase is not. So those aspects of user education are probably orthogonal.
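
          To make point 2 concrete, here is a minimal sketch (not the patch code; it only uses the plain Resource getters) of how a single change API could classify a request, rejecting mixed increase/decrease combinations for now:

            import org.apache.hadoop.yarn.api.records.Resource;

            final class ChangeClassifier {
              enum Kind { NO_CHANGE, INCREASE, DECREASE, UNSUPPORTED }

              // Compare the container's current capability with the requested
              // target to decide whether this is an increase or a decrease.
              static Kind classify(Resource current, Resource target) {
                int memDiff = target.getMemory() - current.getMemory();
                int cpuDiff = target.getVirtualCores() - current.getVirtualCores();
                if (memDiff == 0 && cpuDiff == 0) {
                  return Kind.NO_CHANGE;
                }
                if (memDiff >= 0 && cpuDiff >= 0) {
                  return Kind.INCREASE;
                }
                if (memDiff <= 0 && cpuDiff <= 0) {
                  return Kind.DECREASE;
                }
                // One dimension grows while the other shrinks: reject for now.
                return Kind.UNSUPPORTED;
              }
            }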

          We have iterated on this API question a few times now. In the above 2 points I have tried to summarize the reasons for requesting a change instead of an increase/decrease. Even if we don't support both increase and decrease in one request (point 1), I think the simplicity of having a single API (point 2) would be an advantage of change vs. increase/decrease. This also simplifies things to a single callback, onContainerResourceChanged(), instead of 2 callbacks.

          At this point, I will request you and Wangda Tan to consider both future extensions and API simplicity to make a final call on having 2 APIs and 2 callbacks vs 1.

          My thought is, since we already ask the user to provide the old container when he sends out the change request, he should have the old container already, so we don't necessarily have to provide the old container info in the callback method

          I am fine either way. I discussed the same thing offline with Wangda Tan and he thought that having the old container info would make life easier for the user. A user who is using this API is very likely going to have state about the container for which a change has been requested and can match them up using the containerId.

          The AbstractCallbackHandler.onError will be called when the change container request throws exception on the RM side.

          The change request is sent inside the allocate heartbeat request, right? So I am not sure how we get an exception back for the specific case of a failed container change request. Or are you saying that invalid container resource change requests are immediately rejected by the RM synchronously in the allocate RPC?

          I am not against providing a separate cancel API. But I think the API needs to be clear that the cancel is only for increase request, NOT decrease request (just like we don't have something like cancel release container).

          Having a simple cancel request regardless of increase or decrease is preferable, since then we are not leaking the current state of the implementation to the user. It is future-safe: e.g., if we later find an issue with decreasing and the fix makes it non-instantaneous, then we don't want to have to change the API to support that. But today, given that it is instantaneous, we can simply ignore the cancellation of a decrease in the cancel method of AMRMClient. I think the RM does not support a cancel container resource change request. Does it? If it does not, then perhaps this API can be discussed/added in a separate JIRA after there is back-end support for it.

          Hide
          mding MENG DING added a comment -

          Hi, Bikas Saha

          Thanks for the comments.

          I probably didn't make myself clear. We are on the SAME page that, for the sake of point 2 alone, it already makes sense to combine the increase/decrease APIs into one change API:

          public abstract void requestContainerResourceChange(Container container, Resource capability);


          What I was trying to say in the previous post is that supporting a mix of increase and decrease in one change request (point 1) doesn't seem very feasible (even at a later date). But I don't think we need to worry about that for now.

          Since we are combining the increase/decrease APIs, we should definitely combine the callback methods into one as well: onContainerResourceChanged(). At this point, I am inclined to simply do the following, which doesn't incur many code changes. I will discuss this further with Wangda Tan.

          public abstract void onContainersResourceChanged(List<Container> containers);

          Or are you saying that invalid container resource change requests are immediately rejected by the RM synchronously in the allocate RPC?

          Yes, the ApplicationMasterService will perform a series of sanity checks (e.g., requested resource <= maximum allocation), and reject invalid requests immediately. This is the same for other request types too.
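          A rough sketch of the kind of synchronous check described (this is not the actual ApplicationMasterService code; the class and method names below are only for illustration):

          import org.apache.hadoop.yarn.api.records.Resource;
          import org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException;
          import org.apache.hadoop.yarn.util.resource.Resources;

          final class ChangeRequestSanityCheck {
            // Reject a change request whose target is negative or exceeds the
            // maximum allocation; the AM sees this synchronously from allocate().
            static void check(Resource target, Resource maxAllocation)
                throws InvalidResourceRequestException {
              if (target.getMemory() < 0 || target.getVirtualCores() < 0
                  || !Resources.fitsIn(target, maxAllocation)) {
                throw new InvalidResourceRequestException(
                    "Invalid resource change request, target=" + target
                        + ", maxAllocation=" + maxAllocation);
              }
            }
          }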

          Having a simple cancel request regardless of increase or decrease is preferable since then we are not leaking the current state of the implementation to the user. It is future safe

          Makes sense to me. We can probably have something like cancelContainerResourceChange(Container container), which applies to a container that has an outstanding pending increase sitting in the pendingChange map. There is no explicit protocol to support cancellation of a resource change yet. For now we can achieve it by issuing a back-end decrease request with the target resource set to the current resource, which effectively cancels any outstanding increase request.
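          A minimal sketch of that workaround, assuming the combined requestContainerResourceChange(Container, Resource) API discussed above lands as shown (illustrative only, not the patch itself):

          import org.apache.hadoop.yarn.api.records.Container;
          import org.apache.hadoop.yarn.client.api.AMRMClient;

          final class CancelByResize {
            // "Changing" the container back to its current size is treated as a
            // decrease, which clears the outstanding increase request.
            static void cancelPendingIncrease(AMRMClient<?> client, Container container) {
              client.requestContainerResourceChange(container, container.getResource());
            }
          }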

          Hide
          bikassaha Bikas Saha added a comment -

          Does issuing an increase followed by the decrease actually remove the pending change request on the RM, or will it cause the RM to try to change a container resource to the same size as the existing resource and then go down the code path of an increase (new token) or a decrease (update NM)? This would be a corner case that would be good to double-check. If that works and the RM actually removes the pending container change request, then we could use this mechanism to implement a cancel method wrapper in the AMRMClient. Otherwise, if fixes are needed on the RM side, we could do it separately when we fix the RM.

          Hide
          leftnoteasy Wangda Tan added a comment -

          Synced up with MENG DING. I think we're all on the same page now. Some thoughts from my side:

          • We can move the cancel-container-resource-change request to a separate JIRA. For now, using a decrease of the container to its current size to cancel a pending increase request is a workaround; we'd better add a protocol to AMRMProtocol so AMRMClient can do this correctly.
          • For the onContainerResourceChanged API
            public abstract void onContainersResourceChanged(List<Container> containers);
            

            This looks good to me. Since we have a unified API to request a container change, returning the updated container to the AM should be enough; the AM knows whether it is an increased or a decreased container (see the sketch below).
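          A small illustration of how an AM might tell the two apart in the unified callback, assuming it keeps its own record of each container's last known capacity (illustrative only; the class, map, and handler names are hypothetical):

          import java.util.List;
          import java.util.Map;
          import java.util.concurrent.ConcurrentHashMap;
          import org.apache.hadoop.yarn.api.records.Container;
          import org.apache.hadoop.yarn.api.records.ContainerId;
          import org.apache.hadoop.yarn.api.records.Resource;
          import org.apache.hadoop.yarn.util.resource.Resources;

          final class ChangeTracker {
            // Last capacity the AM knew about for each running container.
            private final Map<ContainerId, Resource> knownCapacity = new ConcurrentHashMap<>();

            void onContainersResourceChanged(List<Container> containers) {
              for (Container c : containers) {
                Resource previous = knownCapacity.put(c.getId(), c.getResource());
                if (previous == null) {
                  continue; // unknown container, nothing to compare against
                }
                boolean increased = Resources.fitsIn(previous, c.getResource())
                    && !previous.equals(c.getResource());
                System.out.println(c.getId()
                    + (increased ? " increased to " : " decreased to ") + c.getResource());
              }
            }
          }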

          Thoughts? Bikas Saha.

          Hide
          bikassaha Bikas Saha added a comment -

          Sounds good

          Hide
          mding MENG DING added a comment -

          Attaching a new patch that addresses the following issues:

          • Combine increase/decrease requests into one method
          • Combine increase/decrease callback methods into one method
          • Deprecate the CallbackHandler interface and other related methods
          • Remove pending change requests of a container when that container is released or completed
          • Update related test cases
          • Add a test case to test recovery of resource change requests on RM restart
          Hide
          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 17m 18s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 3 new or modified test files.
          +1 javac 8m 2s There were no new javac warning messages.
          +1 javadoc 10m 44s There were no new javadoc warning messages.
          +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 0m 54s The applied patch generated 3 new checkstyle issues (total was 79, now 76).
          +1 whitespace 0m 12s The patch has no lines that end in whitespace.
          +1 install 1m 38s mvn install still works.
          +1 eclipse:eclipse 0m 35s The patch built with eclipse:eclipse.
          +1 findbugs 1m 40s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          +1 yarn tests 7m 0s Tests passed in hadoop-yarn-applications-distributedshell.
          +1 yarn tests 7m 29s Tests passed in hadoop-yarn-client.
              55m 59s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12767154/YARN-1509.6.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / f9da5cd
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9467/artifact/patchprocess/diffcheckstylehadoop-yarn-client.txt
          hadoop-yarn-applications-distributedshell test log https://builds.apache.org/job/PreCommit-YARN-Build/9467/artifact/patchprocess/testrun_hadoop-yarn-applications-distributedshell.txt
          hadoop-yarn-client test log https://builds.apache.org/job/PreCommit-YARN-Build/9467/artifact/patchprocess/testrun_hadoop-yarn-client.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9467/testReport/
          Java 1.7.0_55
          uname Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9467/console

          This message was automatically generated.

          Hide
          mding MENG DING added a comment -

          Updated the patch to include comments on the deprecated interface/methods referring to the new class/methods.

          Hide
          leftnoteasy Wangda Tan added a comment -

          MENG DING, thanks for the update; it generally looks good. My only comment: is it possible to avoid adding a dependency on commons-lang3? I'm afraid this new dependency adds some risk of jar dependency conflicts, etc.

          I found that the only place using commons-lang3 is Pair; could you take a look at whether AbstractMap.SimpleEntry meets your requirements?

          Hide
          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 17m 34s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 3 new or modified test files.
          +1 javac 8m 4s There were no new javac warning messages.
          +1 javadoc 10m 37s There were no new javadoc warning messages.
          +1 release audit 0m 25s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 0m 47s The applied patch generated 3 new checkstyle issues (total was 79, now 75).
          +1 whitespace 0m 12s The patch has no lines that end in whitespace.
          +1 install 1m 31s mvn install still works.
          +1 eclipse:eclipse 0m 37s The patch built with eclipse:eclipse.
          +1 findbugs 1m 40s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          +1 yarn tests 6m 56s Tests passed in hadoop-yarn-applications-distributedshell.
          +1 yarn tests 7m 38s Tests passed in hadoop-yarn-client.
              56m 6s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12768817/YARN-1509.7.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 56e4f62
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9583/artifact/patchprocess/diffcheckstylehadoop-yarn-client.txt
          hadoop-yarn-applications-distributedshell test log https://builds.apache.org/job/PreCommit-YARN-Build/9583/artifact/patchprocess/testrun_hadoop-yarn-applications-distributedshell.txt
          hadoop-yarn-client test log https://builds.apache.org/job/PreCommit-YARN-Build/9583/artifact/patchprocess/testrun_hadoop-yarn-client.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9583/testReport/
          Java 1.7.0_55
          uname Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9583/console

          This message was automatically generated.

          Hide
          mding MENG DING added a comment -

          Thanks Wangda Tan for the comments. Your concern is valid. I have updated the patch to use AbstractMap.SimpleEntry instead.
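          For reference, a minimal sketch of what the replacement looks like with the JDK class (illustrative; the exact key and value types used in the patch are an assumption here):

          import java.util.AbstractMap.SimpleEntry;
          import java.util.HashMap;
          import java.util.Map;
          import org.apache.hadoop.yarn.api.records.Container;
          import org.apache.hadoop.yarn.api.records.ContainerId;
          import org.apache.hadoop.yarn.api.records.Resource;

          final class PendingChangeMap {
            // (container, target resource) per container id, no commons-lang3 Pair needed.
            private final Map<ContainerId, SimpleEntry<Container, Resource>> pendingChange =
                new HashMap<>();

            void add(Container container, Resource target) {
              pendingChange.put(container.getId(), new SimpleEntry<>(container, target));
            }

            Resource targetOf(ContainerId id) {
              SimpleEntry<Container, Resource> entry = pendingChange.get(id);
              return entry == null ? null : entry.getValue();
            }
          }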

          Hide
          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 23m 38s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 3 new or modified test files.
          +1 javac 11m 14s There were no new javac warning messages.
          +1 javadoc 15m 22s There were no new javadoc warning messages.
          +1 release audit 0m 36s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 1m 7s The applied patch generated 3 new checkstyle issues (total was 79, now 75).
          +1 whitespace 0m 14s The patch has no lines that end in whitespace.
          +1 install 2m 16s mvn install still works.
          +1 eclipse:eclipse 0m 51s The patch built with eclipse:eclipse.
          +1 findbugs 2m 22s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          -1 yarn tests 6m 46s Tests failed in hadoop-yarn-applications-distributedshell.
          -1 yarn tests 8m 3s Tests failed in hadoop-yarn-client.
              72m 36s  



          Reason Tests
          Failed unit tests hadoop.yarn.applications.distributedshell.TestDistributedShell
            hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
            hadoop.yarn.client.api.impl.TestYarnClient



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12769009/YARN-1509.8.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / ed9806e
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9590/artifact/patchprocess/diffcheckstylehadoop-yarn-client.txt
          hadoop-yarn-applications-distributedshell test log https://builds.apache.org/job/PreCommit-YARN-Build/9590/artifact/patchprocess/testrun_hadoop-yarn-applications-distributedshell.txt
          hadoop-yarn-client test log https://builds.apache.org/job/PreCommit-YARN-Build/9590/artifact/patchprocess/testrun_hadoop-yarn-client.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9590/testReport/
          Java 1.7.0_55
          uname Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9590/console

          This message was automatically generated.

          Hide
          mding MENG DING added a comment -

          The failed tests are not related.

          Hide
          mding MENG DING added a comment -

          Thanks Jian He for reviewing the patch and giving feedback offline. To summarize, in the following function:

          AMRMClientImpl.java
          +  protected void removePendingChangeRequests(
          +      List<Container> changedContainers, boolean isIncrease) {
          +    for (Container changedContainer : changedContainers) {
          +      ContainerId containerId = changedContainer.getId();
          +      if (pendingChange.get(containerId) == null) {
          +        continue;
          +      }
          +      Resource target = pendingChange.get(containerId).getValue();
          +      if (target == null) {
          +        continue;
          +      }
          +      Resource changed = changedContainer.getResource();
          +      if (isIncrease) {
          +        if (Resources.fitsIn(target, changed)) {
          +          if (LOG.isDebugEnabled()) {
          +            LOG.debug("RM has confirmed increased resource allocation for "
          +                + "container " + containerId + ". Current resource allocation:"
          +                + changed + ". Remove pending change request:"
          +                + target);
          +          }
          +          pendingChange.remove(containerId);
          +        }
          +      } else {
          +        if (Resources.fitsIn(changed, target)) {
          +          if (LOG.isDebugEnabled()) {
          +            LOG.debug("RM has confirmed decreased resource allocation for "
          +                + "container " + containerId + ". Current resource allocation:"
          +                + changed + ". Remove pending change request:"
          +                + target);
          +          }
          +          pendingChange.remove(containerId);
          +        }
          +      }
          +    }
          +  }
          
          • There is no need to check null for target, as under no circumstance will it become null.
          • Better yet, there is no need to compare changed with target at all, because Resources.fitsIn(target, changed) will always be true for a confirmed increase request, and Resources.fitsIn(changed, target) will always be true for a confirmed decrease request. I added these checks originally to be defensive, but there is really no need for them. (A simplified sketch follows below.)
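          In other words, the method can shrink to roughly the following (a sketch of the simplified logic only, reusing the pendingChange map and LOG of the class quoted above; the exact final patch may differ):

            // Once the RM confirms a change, simply drop any pending change
            // request for that container; no resource comparison is needed.
            protected void removePendingChangeRequests(List<Container> changedContainers) {
              for (Container changedContainer : changedContainers) {
                ContainerId containerId = changedContainer.getId();
                if (pendingChange.remove(containerId) != null && LOG.isDebugEnabled()) {
                  LOG.debug("RM has confirmed changed resource allocation for container "
                      + containerId + ". Current resource allocation: "
                      + changedContainer.getResource());
                }
              }
            }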

          Attaching latest patch that addresses the above.

          Hide
          mding MENG DING added a comment -

          Please ignore the previous patch, and see the latest one.

          Hide
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 7s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 3m 53s trunk passed
          +1 compile 1m 5s trunk passed with JDK v1.8.0_60
          +1 compile 0m 59s trunk passed with JDK v1.7.0_79
          +1 checkstyle 0m 32s trunk passed
          +1 mvneclipse 0m 32s trunk passed
          +1 findbugs 1m 13s trunk passed
          +1 javadoc 0m 30s trunk passed with JDK v1.8.0_60
          +1 javadoc 0m 33s trunk passed with JDK v1.7.0_79
          -1 mvninstall 0m 13s hadoop-yarn-applications-distributedshell in the patch failed.
          +1 compile 0m 56s the patch passed with JDK v1.8.0_60
          +1 javac 0m 56s the patch passed
          +1 compile 0m 51s the patch passed with JDK v1.7.0_79
          +1 javac 0m 51s the patch passed
          -1 checkstyle 0m 28s Patch generated 3 new checkstyle issues in hadoop-yarn-project/hadoop-yarn (total was 124, now 120).
          +1 mvneclipse 0m 27s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 21s the patch passed
          +1 javadoc 0m 27s the patch passed with JDK v1.8.0_60
          +1 javadoc 0m 31s the patch passed with JDK v1.7.0_79
          -1 unit 16m 55s hadoop-yarn-applications-distributedshell in the patch failed with JDK v1.8.0_60.
          -1 unit 49m 33s hadoop-yarn-client in the patch failed with JDK v1.8.0_60.
          -1 unit 16m 57s hadoop-yarn-applications-distributedshell in the patch failed with JDK v1.7.0_79.
          -1 unit 49m 36s hadoop-yarn-client in the patch failed with JDK v1.7.0_79.
          +1 asflicense 0m 29s Patch does not generate ASF License warnings.
          149m 45s



          Reason Tests
          JDK v1.8.0_60 Failed junit tests hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
            hadoop.yarn.client.TestGetGroups
          JDK v1.8.0_60 Timed out junit tests org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell
            org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
            org.apache.hadoop.yarn.client.api.impl.TestYarnClient
            org.apache.hadoop.yarn.client.api.impl.TestNMClient
          JDK v1.7.0_79 Failed junit tests hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
            hadoop.yarn.client.TestGetGroups
          JDK v1.7.0_79 Timed out junit tests org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell
            org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
            org.apache.hadoop.yarn.client.api.impl.TestYarnClient
            org.apache.hadoop.yarn.client.api.impl.TestNMClient



          Subsystem Report/Notes
          Docker Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-11-03
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12770338/YARN-1509.9.patch
          JIRA Issue YARN-1509
          Optional Tests asflicense javac javadoc mvninstall unit findbugs checkstyle compile
          uname Linux e0db4c49b346 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/apache-yetus-1a9afee/precommit/personality/hadoop.sh
          git revision trunk / 957f031
          Default Java 1.7.0_79
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_60 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79
          findbugs v3.0.0
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/9619/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9619/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9619/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9619/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9619/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell-jdk1.7.0_79.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9619/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9619/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-YARN-Build/9619/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-YARN-Build/9619/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell-jdk1.7.0_79.txt https://builds.apache.org/job/PreCommit-YARN-Build/9619/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9619/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn
          Max memory used 228MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9619/console

          This message was automatically generated.

          Hide
          mding MENG DING added a comment -

          The test failure should be related to YARN-4326.

          In addition, the mvn install test script is flawed. It goes directly into the hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell directory and does a mvn install, which still uses the old local Maven repo for the YARN client. This causes the build to fail. The mvn install test should be done in the root hadoop directory.

          Tue Nov  3 16:18:38 UTC 2015
          cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
          mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-0 -DskipTests -fae clean install -DskipTests=true -Dmaven.javadoc.skip=true
          [INFO] Scanning for projects...
          ...
          ...
          [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-yarn-applications-distributedshell ---
          [INFO] Changes detected - recompiling the module!
          [INFO] Compiling 4 source files to /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/target/classes
          [INFO] -------------------------------------------------------------
          [ERROR] COMPILATION ERROR : 
          [INFO] -------------------------------------------------------------
          [ERROR] /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java:[735,50] cannot find symbol
            symbol:   class AbstractCallbackHandler
            location: class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
          [ERROR] /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java:[559,20] cannot find symbol
            symbol:   class AbstractCallbackHandler
            location: class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
          [ERROR] /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java:[737,5] method does not override or implement a method from a supertype
          [ERROR] /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java:[805,5] method does not override or implement a method from a supertype
          [ERROR] /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java:[838,5] method does not override or implement a method from a supertype
          [ERROR] /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java:[841,5] method does not override or implement a method from a supertype
          [ERROR] /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java:[846,5] method does not override or implement a method from a supertype
          [ERROR] /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java:[849,5] method does not override or implement a method from a supertype
          [ERROR] /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java:[857,5] method does not override or implement a method from a supertype
          [INFO] 9 errors 
          [INFO] -------------------------------------------------------------
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD FAILURE
          
          Hide
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 7s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 3m 18s trunk passed
          +1 compile 0m 51s trunk passed with JDK v1.8.0_60
          +1 compile 0m 48s trunk passed with JDK v1.7.0_79
          +1 checkstyle 0m 27s trunk passed
          +1 mvneclipse 0m 30s trunk passed
          +1 findbugs 1m 8s trunk passed
          +1 javadoc 0m 32s trunk passed with JDK v1.8.0_60
          +1 javadoc 0m 35s trunk passed with JDK v1.7.0_79
          -1 mvninstall 0m 15s hadoop-yarn-applications-distributedshell in the patch failed.
          +1 compile 0m 55s the patch passed with JDK v1.8.0_60
          +1 javac 0m 55s the patch passed
          +1 compile 0m 50s the patch passed with JDK v1.7.0_79
          +1 javac 0m 50s the patch passed
          -1 checkstyle 0m 28s Patch generated 3 new checkstyle issues in hadoop-yarn-project/hadoop-yarn (total was 125, now 121).
          +1 mvneclipse 0m 30s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 17s the patch passed
          +1 javadoc 0m 28s the patch passed with JDK v1.8.0_60
          +1 javadoc 0m 30s the patch passed with JDK v1.7.0_79
          -1 unit 16m 54s hadoop-yarn-applications-distributedshell in the patch failed with JDK v1.8.0_60.
          -1 unit 49m 25s hadoop-yarn-client in the patch failed with JDK v1.8.0_60.
          -1 unit 16m 59s hadoop-yarn-applications-distributedshell in the patch failed with JDK v1.7.0_79.
          -1 unit 49m 34s hadoop-yarn-client in the patch failed with JDK v1.7.0_79.
          +1 asflicense 0m 30s Patch does not generate ASF License warnings.
          148m 27s



          Reason Tests
          JDK v1.8.0_60 Failed junit tests hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
            hadoop.yarn.client.TestGetGroups
          JDK v1.8.0_60 Timed out junit tests org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell
            org.apache.hadoop.yarn.client.api.impl.TestYarnClient
            org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
            org.apache.hadoop.yarn.client.api.impl.TestNMClient
          JDK v1.7.0_79 Failed junit tests hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
            hadoop.yarn.client.TestGetGroups
          JDK v1.7.0_79 Timed out junit tests org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell
            org.apache.hadoop.yarn.client.api.impl.TestYarnClient
            org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
            org.apache.hadoop.yarn.client.api.impl.TestNMClient



          Subsystem Report/Notes
          Docker Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-11-03
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12770343/YARN-1509.10.patch
          JIRA Issue YARN-1509
          Optional Tests asflicense javac javadoc mvninstall unit findbugs checkstyle compile
          uname Linux f89fbd4dff15 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/apache-yetus-1a9afee/precommit/personality/hadoop.sh
          git revision trunk / 957f031
          Default Java 1.7.0_79
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_60 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79
          findbugs v3.0.0
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/9620/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9620/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9620/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9620/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9620/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell-jdk1.7.0_79.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9620/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9620/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-YARN-Build/9620/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-YARN-Build/9620/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell-jdk1.7.0_79.txt https://builds.apache.org/job/PreCommit-YARN-Build/9620/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9620/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn
          Max memory used 226MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9620/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 3m 23s trunk passed
          +1 compile 1m 3s trunk passed with JDK v1.8.0_60
          +1 compile 0m 54s trunk passed with JDK v1.7.0_79
          +1 checkstyle 0m 30s trunk passed
          +1 mvneclipse 0m 27s trunk passed
          +1 findbugs 1m 7s trunk passed
          +1 javadoc 0m 31s trunk passed with JDK v1.8.0_60
          +1 javadoc 0m 33s trunk passed with JDK v1.7.0_79
          -1 mvninstall 0m 13s hadoop-yarn-applications-distributedshell in the patch failed.
          +1 compile 0m 59s the patch passed with JDK v1.8.0_60
          +1 javac 0m 59s the patch passed
          +1 compile 0m 53s the patch passed with JDK v1.7.0_79
          +1 javac 0m 53s the patch passed
          -1 checkstyle 0m 31s Patch generated 3 new checkstyle issues in hadoop-yarn-project/hadoop-yarn (total was 124, now 120).
          +1 mvneclipse 0m 27s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 26s the patch passed
          +1 javadoc 0m 30s the patch passed with JDK v1.8.0_60
          +1 javadoc 0m 34s the patch passed with JDK v1.7.0_79
          +1 unit 7m 3s hadoop-yarn-applications-distributedshell in the patch passed with JDK v1.8.0_60.
          -1 unit 49m 35s hadoop-yarn-client in the patch failed with JDK v1.8.0_60.
          +1 unit 7m 2s hadoop-yarn-applications-distributedshell in the patch passed with JDK v1.7.0_79.
          -1 unit 49m 37s hadoop-yarn-client in the patch failed with JDK v1.7.0_79.
          +1 asflicense 0m 26s Patch does not generate ASF License warnings.
          129m 35s



          Reason Tests
          JDK v1.8.0_60 Failed junit tests hadoop.yarn.client.TestGetGroups
          JDK v1.8.0_60 Timed out junit tests org.apache.hadoop.yarn.client.api.impl.TestYarnClient
            org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
            org.apache.hadoop.yarn.client.api.impl.TestNMClient
          JDK v1.7.0_79 Failed junit tests hadoop.yarn.client.TestGetGroups
          JDK v1.7.0_79 Timed out junit tests org.apache.hadoop.yarn.client.api.impl.TestYarnClient
            org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
            org.apache.hadoop.yarn.client.api.impl.TestNMClient



          Subsystem Report/Notes
          Docker Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-11-10
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12770343/YARN-1509.10.patch
          JIRA Issue YARN-1509
          Optional Tests asflicense javac javadoc mvninstall unit findbugs checkstyle compile
          uname Linux a4fa58c2aa5c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/apache-yetus-ee5baeb/precommit/personality/hadoop.sh
          git revision trunk / 493e8ae
          Default Java 1.7.0_79
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_60 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79
          findbugs v3.0.0
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/9651/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9651/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9651/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9651/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9651/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-YARN-Build/9651/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9651/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn
          Max memory used 228MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9651/console

          This message was automatically generated.

          jianhe Jian He added a comment -

          lgtm, thanks Meng!

          leftnoteasy Wangda Tan added a comment -

          +1 to the latest patch, will commit it tomorrow if there are no objections.

          leftnoteasy Wangda Tan added a comment -

          Committed to trunk/branch-2. Thanks to MENG DING for the patch and to Bikas Saha/Jian He for the reviews!

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8800 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8800/)
          YARN-1509. Make AMRMClient support send increase container request and (wangda: rev 7ff280fca9af45b98cee2336e78803da46b0f8a5)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientOnRMRestart.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #1399 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/1399/)
          YARN-1509. Make AMRMClient support send increase container request and (wangda: rev 7ff280fca9af45b98cee2336e78803da46b0f8a5)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientOnRMRestart.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #675 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/675/)
          YARN-1509. Make AMRMClient support send increase container request and (wangda: rev 7ff280fca9af45b98cee2336e78803da46b0f8a5)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientOnRMRestart.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #663 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/663/)
          YARN-1509. Make AMRMClient support send increase container request and (wangda: rev 7ff280fca9af45b98cee2336e78803da46b0f8a5)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientOnRMRestart.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2539 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2539/)
          YARN-1509. Make AMRMClient support send increase container request and (wangda: rev 7ff280fca9af45b98cee2336e78803da46b0f8a5)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientOnRMRestart.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2604 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2604/)
          YARN-1509. Make AMRMClient support send increase container request and (wangda: rev 7ff280fca9af45b98cee2336e78803da46b0f8a5)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientOnRMRestart.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #602 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/602/)
          YARN-1509. Make AMRMClient support send increase container request and (wangda: rev 7ff280fca9af45b98cee2336e78803da46b0f8a5)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientOnRMRestart.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java

            People

            • Assignee: mding MENG DING
            • Reporter: gp.leftnoteasy Wangda Tan (No longer used)
            • Votes: 0
            • Watchers: 9
