Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.9.0, 3.0.0-beta1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      In order to support transparent "spanning" of jobs across sub-clusters, all AM-RM communications are proxied (via YARN-2884).

      This JIRA tracks federation-specific mechanisms that decide how to "split/broadcast" requests to the RMs and "merge" answers to
      the AM.

This is the part-two JIRA, which adds the secondary sub-clusters and does the full split/merge of requests. Part one is in YARN-3666.
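
As a rough illustration of the split/broadcast/merge idea, here is a minimal, self-contained sketch in plain Java with toy types and a round-robin split; the real code consults the federation policies and works on the YARN protocol records, so names and logic here are illustrative only:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Toy illustration of the split/broadcast/merge flow, not the actual
    // FederationInterceptor code: split one AM request into per-sub-cluster
    // requests, "send" each piece, then merge the answers back for the AM.
    public class SplitMergeSketch {
      public static void main(String[] args) {
        List<String> amRequests = List.of("containerA", "containerB", "containerC");
        List<String> subClusters = List.of("SC-1", "SC-2");

        // Split/broadcast: assign each resource request to a sub-cluster
        // (round-robin here; the real code consults a federation policy).
        Map<String, List<String>> perCluster = new HashMap<>();
        for (int i = 0; i < amRequests.size(); i++) {
          String target = subClusters.get(i % subClusters.size());
          perCluster.computeIfAbsent(target, k -> new ArrayList<>()).add(amRequests.get(i));
        }

        // Merge: collect the per-sub-cluster "responses" into one answer for the AM.
        List<String> merged = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : perCluster.entrySet()) {
          for (String request : e.getValue()) {
            merged.add(request + " allocated by " + e.getKey());
          }
        }
        System.out.println(merged);
      }
    }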

      1. YARN-6511-YARN-2915.v1.patch
        66 kB
        Botong Huang
      2. YARN-6511-YARN-2915.v2.patch
        66 kB
        Botong Huang
      3. YARN-6511-YARN-2915.v3.patch
        60 kB
        Botong Huang


          Activity

          Botong Huang added a comment -

          Great! Thanks Subru Krishnan and Jian He for the review and quick response!

          Subru Krishnan added a comment -

          Thanks Botong Huang for your contribution & Jian He for the review! I just committed this to branch YARN-2915.

          Jian He added a comment -

          lgtm, thanks

          Subru Krishnan added a comment -

          Thanks Botong Huang for addressing my comments, the latest patch (v3) LGTM. I'll wait for a day to see if Jian He has any further questions/concerns before committing.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 19s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          0 mvndep 0m 49s Maven dependency ordering for branch
          +1 mvninstall 17m 4s YARN-2915 passed
          +1 compile 2m 7s YARN-2915 passed
          +1 checkstyle 0m 39s YARN-2915 passed
          +1 mvnsite 0m 55s YARN-2915 passed
          -1 findbugs 0m 44s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in YARN-2915 has 5 extant Findbugs warnings.
          +1 javadoc 0m 36s YARN-2915 passed
          0 mvndep 0m 9s Maven dependency ordering for patch
          +1 mvninstall 0m 49s the patch passed
          +1 compile 1m 44s the patch passed
          +1 javac 1m 44s the patch passed
          +1 checkstyle 0m 36s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
          +1 mvnsite 0m 50s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 6s the patch passed
          +1 javadoc 0m 35s the patch passed
          +1 unit 1m 22s hadoop-yarn-server-common in the patch passed.
          +1 unit 13m 12s hadoop-yarn-server-nodemanager in the patch passed.
          +1 asflicense 0m 20s The patch does not generate ASF License warnings.
          51m 21s



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:14b5c93
          JIRA Issue YARN-6511
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12871646/YARN-6511-YARN-2915.v3.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 3f16987558ad 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision YARN-2915 / a573bec
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/16129/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16129/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/16129/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Botong Huang added a comment -

          Thanks Subru Krishnan for the review! I've addressed most of the comments in the v3 patch (as well as the ones from Jian He). For the rest, please see below:

          Do we need a UnmanagedAMPoolManager per interceptor instance or can we use one at AMRMProxyService level?

          It is easier the current way because we constantly need to fetch all UAMs associated with one application (keyed by subClusterId).
          If we use one pool per AMRMProxy, we would probably need to key each UAM by appId+subClusterId, and looking up all the UAMs of one application would no longer be straightforward.
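
          Roughly, the trade-off looks like this (toy types only, not the real UnmanagedAMPoolManager API):

            import java.util.Map;
            import java.util.concurrent.ConcurrentHashMap;

            // Toy model only. One pool per interceptor: UAMs are keyed by sub-cluster id
            // alone, so "all UAMs of this application" is simply values().
            class PerApplicationUamPool {
              private final Map<String, Object> uamBySubCluster = new ConcurrentHashMap<>();

              void put(String subClusterId, Object uam) {
                uamBySubCluster.put(subClusterId, uam);
              }

              Iterable<Object> allUamsForThisApplication() {
                return uamBySubCluster.values();
              }
            }

            // One shared pool at the AMRMProxyService level: the key has to become
            // appId + subClusterId, and finding all UAMs of one application means
            // filtering the whole map.
            class SharedUamPool {
              private final Map<String, Object> uamByAppAndSubCluster = new ConcurrentHashMap<>();

              void put(String appId, String subClusterId, Object uam) {
                uamByAppAndSubCluster.put(appId + "/" + subClusterId, uam);
              }
            }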

          Is updating the queue below safe in loadAMRMPolicy

          Yes, the variable queue is a local string, used only by the policy manager.

          I feel the finishApplicationMaster of the pool should be moved to UnmanagedAMPoolManager.

          Yes, we could. However, it would then likely be a blocking call, and we would lose the freedom to schedule the tasks, synchronously call finish in the home sub-cluster, and then wait for the secondaries to come back. Alternatively, we would need additional interfaces in UAMPoolManager, one to schedule and one to fetch results. I've added a TODO for this.

          I see dynamic instantiations of ExecutorCompletionService in finish, register, etc invocations. Wouldn't we be better served by pre-initializing it?

          We need to create them locally because of concurrency: the allocate and finish calls can be invoked concurrently, and sharing the same completion service object would mix up the tasks submitted from both sides.
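
          A rough sketch of this pattern in plain Java (the task bodies are stand-ins; only the use of a per-call ExecutorCompletionService over a shared executor, and the ordering of home vs. secondary calls, mirrors what is described above):

            import java.util.ArrayList;
            import java.util.List;
            import java.util.concurrent.ExecutorCompletionService;
            import java.util.concurrent.ExecutorService;
            import java.util.concurrent.Executors;

            public class FinishFanOutSketch {
              private static final ExecutorService SHARED_POOL = Executors.newCachedThreadPool();

              // A fresh ExecutorCompletionService per call: results of a concurrent
              // allocate/register/finish call cannot get mixed into this one.
              static List<String> finishAll(List<String> secondaries) throws Exception {
                ExecutorCompletionService<String> ecs = new ExecutorCompletionService<>(SHARED_POOL);

                // 1. Schedule the finish calls to all secondary sub-clusters asynchronously.
                for (String sc : secondaries) {
                  ecs.submit(() -> "finished in " + sc); // stand-in for the real RPC
                }

                // 2. Call finish in the home sub-cluster synchronously in the meantime.
                List<String> results = new ArrayList<>();
                results.add("finished in home"); // stand-in for the synchronous home call

                // 3. Only then wait for the secondary results to come back.
                for (int i = 0; i < secondaries.size(); i++) {
                  results.add(ecs.take().get());
                }
                return results;
              }

              public static void main(String[] args) throws Exception {
                System.out.println(finishAll(List.of("SC-1", "SC-2")));
                SHARED_POOL.shutdown();
              }
            }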

          Is getSubClusterForNode required as the resolver should be doing this instead of every client?

          AbstractSubClusterResolver.getSubClusterForNode throws when resolving an unknown node. We don't want to throw in this case, so we catch the exception and log a warning instead.
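
          In sketch form (with a toy resolver interface standing in for the YARN resolver), the catch-and-log wrapper amounts to:

            import java.util.Optional;
            import java.util.logging.Logger;

            public class NodeResolutionSketch {
              private static final Logger LOG = Logger.getLogger(NodeResolutionSketch.class.getName());

              // Stand-in for the real resolver, which throws for an unknown node.
              interface Resolver {
                String getSubClusterForNode(String node) throws Exception;
              }

              // An unknown node should not fail the whole allocate call, so we catch,
              // log a warning, and let the caller fall back (e.g. to the home sub-cluster).
              static Optional<String> tryResolve(Resolver resolver, String node) {
                try {
                  return Optional.of(resolver.getSubClusterForNode(node));
                } catch (Exception e) {
                  LOG.warning("Cannot resolve node " + node + ", skipping: " + e.getMessage());
                  return Optional.empty();
                }
              }
            }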

          Move YarnConfiguration outside the for loop in registerWithNewSubClusters

          We cannot, because we need a different config per UAM, each loaded with its own sub-cluster id.
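
          Roughly the following (illustrative only: the real code builds a YarnConfiguration per sub-cluster, modeled here with java.util.Properties and a generic setting name):

            import java.util.HashMap;
            import java.util.List;
            import java.util.Map;
            import java.util.Properties;

            public class PerUamConfigSketch {
              public static void main(String[] args) {
                Properties baseConf = new Properties();
                baseConf.setProperty("yarn.shared.setting", "value");

                List<String> newSecondaries = List.of("SC-1", "SC-2");
                Map<String, Properties> uamConfs = new HashMap<>();

                // The config must be created inside the loop: each UAM talks to a different
                // sub-cluster, so each copy is loaded with that sub-cluster's id (and, in the
                // real code, with that RM's addresses).
                for (String subClusterId : newSecondaries) {
                  Properties uamConf = new Properties();
                  uamConf.putAll(baseConf);
                  uamConf.setProperty("yarn.resourcemanager.cluster-id", subClusterId);
                  uamConfs.put(subClusterId, uamConf);
                }
                System.out.println(uamConfs);
              }
            }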

          Consider looping on registrations in lieu of requests in sendRequestsToSecondaryResourceManagers

          The registrations only contain the newly added secondary sub-clusters, while here we need to loop over (send heartbeats to) all known secondaries.

          Jian He added a comment -

          I see, thanks for the explanation.

          Botong Huang added a comment - edited

          Hi Jian He, thanks for the review! We only register a UAM the first time a sub-cluster is asked for; please see the this.uamPool.hasUAMId(subClusterId.getId()) check in registerWithNewSubClusters. Failed-to-register sub-clusters will be handled in the next iteration, where we will likely retry and consult the policy to fall back. I will fix the other minor issue in the next version. Thanks!
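
          In toy form (hypothetical pool type; only the "register at most once per sub-cluster" guard mirrors the patch):

            import java.util.Map;
            import java.util.concurrent.ConcurrentHashMap;

            public class RegisterOnceSketch {
              // Stand-in for the UAM pool; the real check is
              // this.uamPool.hasUAMId(subClusterId.getId()).
              private final Map<String, Object> uamPool = new ConcurrentHashMap<>();

              void maybeRegister(String subClusterId) {
                // Sub-clusters we have already registered with are skipped;
                // only a newly requested sub-cluster spawns (and registers) a UAM.
                if (uamPool.containsKey(subClusterId)) {
                  return;
                }
                Object uam = new Object(); // stand-in for creating and registering a UAM
                uamPool.put(subClusterId, uam);
              }
            }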

          Jian He added a comment -

          Thanks Botong Huang, not sure I'm reading it right:
          It looks like every allocate call registers with all the sub-clusters by spinning off #clusters threads. Will this be a bottleneck, given that allocate is a fairly frequent call? Also, what happens to a failed-to-register cluster? Will it keep re-registering on every allocate?
          nit: unused param 'appContext' in splitResourceRequests

          Subru Krishnan added a comment -

          Thanks Botong Huang for the patch. I looked at it; please find my comments below:

          • I don't see the need for TestableAMRMProxyPolicy, we should simply reuse BroadcastAMRMProxyPolicy. If possible, do the same for TestableRouterPolicy.
          • Do we need a UnmanagedAMPoolManager per interceptor instance or can we use one at AMRMProxyService level?
          • loadAMRMPolicy is independent of FederationInterceptor, can be refactored out to common class.
          • Is updating the queue below safe in loadAMRMPolicy?
             queue = YarnConfiguration.DEFAULT_FEDERATION_POLICY_KEY;
          • I feel the finishApplicationMaster of the pool should be moved to UnmanagedAMPoolManager.
          • I see dynamic instantiations of ExecutorCompletionService in finish, register, etc invocations. Wouldn't we be better served by pre-initializing it?
          • Is getSubClusterForNode required as the resolver should be doing this instead of every client?
          • Consider looping on registrations in lieu of requests in sendRequestsToSecondaryResourceManagers as that'll not only minimize the iterations but also make the checks redundant.
          • Quite a few ops, like the creation of YarnConfiguration and the getApplicationContext invocation, can be moved outside the for loop in registerWithNewSubClusters.
          • YarnConfiguration.getClusterId(getConf()) is unnecessary as we have homeSubClusterId.
          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 23s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 5 new or modified test files.
          0 mvndep 0m 36s Maven dependency ordering for branch
          +1 mvninstall 14m 23s YARN-2915 passed
          +1 compile 1m 50s YARN-2915 passed
          +1 checkstyle 0m 34s YARN-2915 passed
          +1 mvnsite 0m 54s YARN-2915 passed
          +1 mvneclipse 0m 29s YARN-2915 passed
          -1 findbugs 0m 40s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in YARN-2915 has 5 extant Findbugs warnings.
          +1 javadoc 0m 33s YARN-2915 passed
          0 mvndep 0m 9s Maven dependency ordering for patch
          +1 mvninstall 0m 44s the patch passed
          +1 compile 1m 42s the patch passed
          +1 javac 1m 42s the patch passed
          -0 checkstyle 0m 32s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 16 new + 1 unchanged - 0 fixed = 17 total (was 1)
          +1 mvnsite 0m 47s the patch passed
          +1 mvneclipse 0m 25s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          -1 findbugs 0m 58s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager generated 4 new + 5 unchanged - 0 fixed = 9 total (was 5)
          -1 javadoc 0m 16s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common generated 2 new + 162 unchanged - 0 fixed = 164 total (was 162)
          -1 javadoc 0m 15s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 6 new + 231 unchanged - 0 fixed = 237 total (was 231)
          +1 unit 1m 19s hadoop-yarn-server-common in the patch passed.
          +1 unit 13m 29s hadoop-yarn-server-nodemanager in the patch passed.
          +1 asflicense 0m 22s The patch does not generate ASF License warnings.
          49m 6s



          Reason Tests
          FindBugs module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
            org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor.mergeAllocateResponses(AllocateResponse) makes inefficient use of keySet iterator instead of entrySet iterator. At FederationInterceptor.java:[line 875]
            org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor.mergeRegistrationResponses(AllocateResponse, Map) makes inefficient use of keySet iterator instead of entrySet iterator. At FederationInterceptor.java:[line 916]
            org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor.sendRequestsToSecondaryResourceManagers(Map) makes inefficient use of keySet iterator instead of entrySet iterator. At FederationInterceptor.java:[line 726]
            org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor.splitAllocateRequest(AllocateRequest) makes inefficient use of keySet iterator instead of entrySet iterator. At FederationInterceptor.java:[line 627]



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:14b5c93
          JIRA Issue YARN-6511
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12870679/YARN-6511-YARN-2915.v1.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux dc3e6ec55113 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision YARN-2915 / a573bec
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/16060/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16060/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/16060/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.html
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/16060/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/16060/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16060/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/16060/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.


            People

            • Assignee:
              Botong Huang
            • Reporter:
              Botong Huang
            • Votes:
              0
            • Watchers:
              5
