
YARN-4438: Implement RM leader election with Curator

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: None
    • Labels: None
    • Target Version/s:

      Description

      This is to implement RM leader election with Curator instead of the ActiveStandbyElector from the Common package; this also avoids adding more configs to Common to suit the RM's own needs.
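
      A minimal, self-contained sketch of the approach, for illustration only: Curator's LeaderLatch recipe elects exactly one holder per latch path and fires callbacks on leadership changes, which is where an RM would transition to active or standby. Only the Curator API calls below are real; the connect string, latch path, participant id, and println placeholders are assumptions.

      CuratorElectionSketch.java (illustrative)
      import org.apache.curator.framework.CuratorFramework;
      import org.apache.curator.framework.CuratorFrameworkFactory;
      import org.apache.curator.framework.recipes.leader.LeaderLatch;
      import org.apache.curator.framework.recipes.leader.LeaderLatchListener;
      import org.apache.curator.retry.ExponentialBackoffRetry;

      public class CuratorElectionSketch {
        public static void main(String[] args) throws Exception {
          // One Curator client per process; retry policy values are arbitrary.
          CuratorFramework client = CuratorFrameworkFactory.newClient(
              "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
          client.start();

          // Every RM creates a latch on the same path; Curator elects one holder.
          final LeaderLatch latch = new LeaderLatch(
              client, "/yarn-leader-election/cluster1", "rm1");
          latch.addListener(new LeaderLatchListener() {
            @Override public void isLeader() {
              // The real service would transition the RM to active here.
              System.out.println("rm1 elected leader, transitioning to active");
            }
            @Override public void notLeader() {
              // The real service would transition the RM to standby here.
              System.out.println("rm1 lost leadership, transitioning to standby");
            }
          });
          latch.start();
          Thread.sleep(Long.MAX_VALUE); // stay in the election until killed
        }
      }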

      Attachments

      1. YARN-4438.1.patch
        25 kB
        Jian He
      2. YARN-4438.2.patch
        28 kB
        Jian He
      3. YARN-4438.3.patch
        29 kB
        Jian He
      4. YARN-4438.4.patch
        28 kB
        Jian He
      5. YARN-4438.5.patch
        29 kB
        Jian He
      6. YARN-4438.6.patch
        29 kB
        Jian He

        Activity

        Jian He added a comment -

        A flag is now introduced to enable Curator-based leader election; eventually I'd like to remove the embedded elector and keep only the Curator one.
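
        A rough sketch of how such a flag might gate which elector is created; the config key and the service wiring below are assumptions for illustration, not the patch's exact code:

        // In the ResourceManager service setup (sketch)
        boolean curatorElectionEnabled = conf.getBoolean(
            "yarn.resourcemanager.ha.curator-leader-elector.enabled", false);
        if (curatorElectionEnabled) {
          // The Curator-based elector runs as its own RM service.
          leaderElectorService = new LeaderElectorService(rmContext, this);
          addService(leaderElectorService);
        }
        // Otherwise AdminService keeps creating the existing EmbeddedElectorService.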

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 9m 14s trunk passed
        +1 compile 2m 35s trunk passed with JDK v1.8.0_66
        +1 compile 2m 34s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 32s trunk passed
        +1 mvnsite 1m 16s trunk passed
        +1 mvneclipse 0m 31s trunk passed
        +1 findbugs 3m 1s trunk passed
        -1 javadoc 0m 26s hadoop-yarn-server-resourcemanager in trunk failed with JDK v1.8.0_66.
        +1 javadoc 3m 38s trunk passed with JDK v1.7.0_91
        +1 mvninstall 1m 9s the patch passed
        +1 compile 2m 30s the patch passed with JDK v1.8.0_66
        +1 javac 2m 30s the patch passed
        +1 compile 2m 29s the patch passed with JDK v1.7.0_91
        +1 javac 2m 29s the patch passed
        -1 checkstyle 0m 32s Patch generated 19 new checkstyle issues in hadoop-yarn-project/hadoop-yarn (total was 365, now 383).
        +1 mvnsite 1m 13s the patch passed
        +1 mvneclipse 0m 30s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 3m 21s the patch passed
        -1 javadoc 0m 25s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        +1 javadoc 3m 24s the patch passed with JDK v1.7.0_91
        -1 unit 0m 25s hadoop-yarn-api in the patch failed with JDK v1.8.0_66.
        -1 unit 61m 34s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        -1 unit 0m 30s hadoop-yarn-api in the patch failed with JDK v1.7.0_91.
        -1 unit 62m 4s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
        -1 asflicense 0m 20s Patch generated 2 ASF License warnings.
        167m 2s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.yarn.conf.TestYarnConfigurationFields
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
        JDK v1.7.0_91 Failed junit tests hadoop.yarn.conf.TestYarnConfigurationFields
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12776659/YARN-4438.1.patch
        JIRA Issue YARN-4438
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 09d7b618cad9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / e27fffd
        findbugs v3.0.0
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.7.0_91.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.7.0_91.txt https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9917/testReport/
        asflicense https://builds.apache.org/job/PreCommit-YARN-Build/9917/artifact/patchprocess/patch-asflicense-problems.txt
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Max memory used 76MB
        Powered by Apache Yetus http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/9917/console

        This message was automatically generated.

        Tsuyoshi Ozawa added a comment -

        Jian He, thank you for taking the issue. +1 for the design. Could you check the following comments?

        1. In the code path of RMWebApp.getHAZookeeperConnectionState, embeddedElector is dereferenced directly even though it is null when AdminService.embeddedElector is left uninitialized. Can we use rmContext.getLeaderElectorService() instead? (A null-safe sketch follows the snippets below.)

        AdminService.java
        public class AdminService extends CompositeService implements
            HAServiceProtocol, ResourceManagerAdministrationProtocol {
          public String getHAZookeeperConnectionState() {
            if (!rmContext.isHAEnabled()) {
              return "ResourceManager HA is not enabled.";
            } else if (!autoFailoverEnabled) {
              return "Auto Failover is not enabled.";
            }
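            // NOTE (review): embeddedElector is never initialized here when the
            // Curator elector is used, so this dereference can NPE.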
            return this.embeddedElector.getHAZookeeperConnectionState();
          }
        }
        
        RMWebApp.java
        public class RMWebApp extends WebApp implements YarnWebParams {
          ...
          public String getHAZookeeperConnectionState() {
            return rm.getRMContext().getRMAdminService()
              .getHAZookeeperConnectionState();
          }
        }
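
        A null-safe sketch of this suggestion; the rmContext accessor comes from this discussion, while the elector's getZookeeperConnectionState method name is an assumption, not confirmed API:

        AdminService.java (sketch)
        public String getHAZookeeperConnectionState() {
          if (!rmContext.isHAEnabled()) {
            return "ResourceManager HA is not enabled.";
          } else if (!autoFailoverEnabled) {
            return "Auto Failover is not enabled.";
          }
          // Prefer the elector held by the RMContext, as suggested above.
          if (rmContext.getLeaderElectorService() != null) {
            return rmContext.getLeaderElectorService().getZookeeperConnectionState();
          }
          return embeddedElector == null
              ? "Leader elector is not initialized."
              : embeddedElector.getHAZookeeperConnectionState();
        }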
        

        2. In TestLeaderElectorService.testKillZKInstance, should we explicitly check that (rm1 is active and rm2 is standby) or (rm1 is standby and rm2 is active), to detect the split-brain problem? (A complete version of the stricter check follows the snippets below.)

        TestLeaderElectorService.java
            GenericTestUtils.waitFor(new Supplier<Boolean>() {
              @Override public Boolean get() {
                try {
                  return rm1.getAdminService().getServiceStatus().getState()
                      .equals(HAServiceState.ACTIVE) || rm2.getAdminService()
                      .getServiceStatus().getState().equals(HAServiceState.ACTIVE);
                } catch (IOException e) {
                }
                return false;
              }
        

        can be:

                  return (rm1.getAdminService().getServiceStatus().getState().equals(HAServiceState.ACTIVE)
                      && rm2.getAdminService().getServiceStatus().getState().equals(HAServiceState.STANDBY))
                      || (rm1.getAdminService().getServiceStatus().getState()
                      .equals(HAServiceState.STANDBY) && rm2.getAdminService()
                      .getServiceStatus().getState().equals(HAServiceState.ACTIVE));
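
        For completeness, the stricter check wrapped in the same polling helper might look as follows; the poll interval and timeout values are illustrative, not taken from the patch:

        GenericTestUtils.waitFor(new Supplier<Boolean>() {
          @Override public Boolean get() {
            try {
              HAServiceState s1 = rm1.getAdminService().getServiceStatus().getState();
              HAServiceState s2 = rm2.getAdminService().getServiceStatus().getState();
              // Exactly one active and one standby rules out split brain.
              return (s1 == HAServiceState.ACTIVE && s2 == HAServiceState.STANDBY)
                  || (s1 == HAServiceState.STANDBY && s2 == HAServiceState.ACTIVE);
            } catch (IOException e) {
              return false; // keep polling until the timeout
            }
          }
        }, 100, 20000);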
        

        The following comments are minor nits:

        • LeaderElectorService and TestLeaderElectorService include lines which exceed 80 characters.
        • Should we rename the variable TestingCluster cluster in TestLeaderElectorService to zkCluster to make it clearer?
        • Found a typo (laucnRM should presumably be launchRM):
            public void testRMFailToTransitionToActive() throws Exception{
              ...
              Thread laucnRM = new Thread() {
              ...
            }
          
        • We can remove unused imports in LeaderElectorService, TestLeaderElectorService, and ResourceManager.
        LeaderElectorService.java
        ...
        import org.apache.hadoop.fs.CommonConfigurationKeys;
        ...
        import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
        
        TestLeaderElectorService.java
        ...
        import static org.mockito.Mockito.spy;
        ...
        
        ResourceManager.java
        ...
        import org.apache.curator.framework.recipes.leader.LeaderLatchListener;
        ...
        
        • This is just a question - why did you change the target argument of RMAuditLogger.logFailure from "RMHAProtocolService" to "RM"?
        AdminService.java
        @@ -319,7 +323,7 @@ public synchronized void transitionToActive(
               rm.transitionToActive();
             } catch (Exception e) {
               RMAuditLogger.logFailure(user.getShortUserName(), "transitionToActive",
        -          "", "RMHAProtocolService",
        +          "", "RM",
                   "Exception transitioning to active");
               throw new ServiceFailedException(
                   "Error when transitioning to Active mode", e);
        @@ -338,7 +342,7 @@ public synchronized void transitionToActive(
                   "Error on refreshAll during transistion to Active", e);
             }
             RMAuditLogger.logSuccess(user.getShortUserName(), "transitionToActive",
        -        "RMHAProtocolService");
        +        "RM");
           }
        
        Karthik Kambatla added a comment -

        Would very much like for us to use Curator for leader election. Maybe HDFS could also do the same in the future.

        Quickly skimmed through the patch. High-level comments:

        1. IIRC we use the same ZK quorum for both leader election and the store. Can we re-use the CuratorFramework so that leader election and store operations are fully consistent? Otherwise, the separate clients (and their individual timeouts, etc.) could lead to inconsistencies. (A sketch follows this list.)
        2. Would it be possible to hide the implementation of the leader election - ActiveStandbyElector vs. CuratorElector - behind EmbeddedElector? AdminService and the RM shouldn't need to know the details.
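
        A minimal sketch of the shared-client idea from comment 1; the RMContext setter is hypothetical (it does not exist yet), and the retry policy is illustrative:

        // Build one CuratorFramework for both leader election and ZKRMStateStore.
        CuratorFramework curator = CuratorFrameworkFactory.newClient(
            conf.get(YarnConfiguration.RM_ZK_ADDRESS),  // same quorum for both
            new ExponentialBackoffRetry(1000, 3));
        curator.start();
        // Hypothetical setter: the elector and the store would both read this.
        rmContext.setCuratorFramework(curator);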

        In any case, having written some Curator code in the past, I would like to review the code more closely.

        Jian He added a comment -

        Otherwise, the separate clients (and their individual timeouts, etc.) could lead to inconsistencies.

        Agreed; I was actually discussing the same thing with Xuan Gong offline, and didn't do that just because I wanted to keep the change minimal. I'll make the change accordingly.

        Would it be possible to hide the implementation of the leader election - ActiveStandbyElector vs. CuratorElector - behind EmbeddedElector?

        I'm thinking of removing EmbeddedElectorService later on and separating the LeaderElectorService out from the AdminService. Does that make sense?

        Jian He added a comment -

        Tsuyoshi Ozawa, thanks for reviewing the patch; I have addressed your comments.

        why did you change the target argument of RMAuditLogger.logFailure from

        Because I feel that, semantically, the RM is the target.

        I'll make the change accordingly.

        I tried to modify it, but found more refactoring is needed than expected. I'd like to do this as follow-up work.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 8m 33s trunk passed
        +1 compile 2m 8s trunk passed with JDK v1.8.0_66
        +1 compile 2m 24s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 29s trunk passed
        +1 mvnsite 1m 15s trunk passed
        +1 mvneclipse 0m 31s trunk passed
        +1 findbugs 2m 56s trunk passed
        -1 javadoc 0m 25s hadoop-yarn-server-resourcemanager in trunk failed with JDK v1.8.0_66.
        +1 javadoc 3m 28s trunk passed with JDK v1.7.0_91
        +1 mvninstall 1m 7s the patch passed
        +1 compile 1m 51s the patch passed with JDK v1.8.0_66
        +1 javac 1m 51s the patch passed
        +1 compile 2m 20s the patch passed with JDK v1.7.0_91
        +1 javac 2m 20s the patch passed
        -1 checkstyle 0m 30s Patch generated 14 new checkstyle issues in hadoop-yarn-project/hadoop-yarn (total was 365, now 378).
        +1 mvnsite 1m 18s the patch passed
        +1 mvneclipse 0m 31s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 2m 58s the patch passed
        -1 javadoc 0m 28s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        +1 javadoc 3m 25s the patch passed with JDK v1.7.0_91
        -1 unit 0m 28s hadoop-yarn-api in the patch failed with JDK v1.8.0_66.
        -1 unit 67m 59s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        -1 unit 0m 24s hadoop-yarn-api in the patch failed with JDK v1.7.0_91.
        -1 unit 69m 51s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
        -1 asflicense 0m 21s Patch generated 2 ASF License warnings.
        178m 37s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.yarn.conf.TestYarnConfigurationFields
          hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
        JDK v1.7.0_91 Failed junit tests hadoop.yarn.conf.TestYarnConfigurationFields
          hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12777235/YARN-4438.2.patch
        JIRA Issue YARN-4438
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux b024ae091fb6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / f5a9114
        findbugs v3.0.0
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.7.0_91.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.7.0_91.txt https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9940/testReport/
        asflicense https://builds.apache.org/job/PreCommit-YARN-Build/9940/artifact/patchprocess/patch-asflicense-problems.txt
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Max memory used 75MB
        Powered by Apache Yetus 0.1.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/9940/console

        This message was automatically generated.

        Jian He added a comment -

        Fixed some warnings

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
        +1 mvninstall 7m 28s trunk passed
        +1 compile 1m 53s trunk passed with JDK v1.8.0_66
        +1 compile 2m 7s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 27s trunk passed
        +1 mvnsite 1m 4s trunk passed
        +1 mvneclipse 0m 27s trunk passed
        +1 findbugs 2m 32s trunk passed
        +1 javadoc 1m 0s trunk passed with JDK v1.8.0_66
        +1 javadoc 3m 10s trunk passed with JDK v1.7.0_91
        +1 mvninstall 1m 1s the patch passed
        +1 compile 1m 45s the patch passed with JDK v1.8.0_66
        +1 javac 1m 45s the patch passed
        +1 compile 2m 6s the patch passed with JDK v1.7.0_91
        +1 javac 2m 6s the patch passed
        -1 checkstyle 0m 28s Patch generated 7 new checkstyle issues in hadoop-yarn-project/hadoop-yarn (total was 316, now 322).
        +1 mvnsite 1m 4s the patch passed
        +1 mvneclipse 0m 27s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 2m 48s the patch passed
        +1 javadoc 1m 0s the patch passed with JDK v1.8.0_66
        +1 javadoc 3m 11s the patch passed with JDK v1.7.0_91
        +1 unit 0m 22s hadoop-yarn-api in the patch passed with JDK v1.8.0_66.
        -1 unit 64m 27s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        +1 unit 0m 25s hadoop-yarn-api in the patch passed with JDK v1.7.0_91.
        -1 unit 66m 22s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
        +1 asflicense 0m 23s Patch does not generate ASF License warnings.
        167m 20s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokenAuthentication
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
        JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12777553/YARN-4438.3.patch
        JIRA Issue YARN-4438
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux f76bce34cdae 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / de522d2
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9972/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/9972/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/9972/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9972/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9972/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9972/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Max memory used 76MB
        Powered by Apache Yetus 0.1.0 http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/9972/console

        This message was automatically generated.

        Karthik Kambatla added a comment -

        I'll make the change accordingly.

        Not sure I understand. Regarding a single client for the elector and the store, are we doing it in this patch or punting to a follow-up? I am open to either option, but would prefer doing it here. All that is needed is to store the CuratorFramework instance in RMContext.

        Comments on the patch itself:

        1. LeaderElectorService
          1. The instance, rm, is not used anywhere. Why even pass it? The RMContext should have all the information that the elector needs or updates?
          2. Use conf instead of getConfig() here?
            String zkBasePath = getConfig().get(
                    YarnConfiguration.AUTO_FAILOVER_ZK_BASE_PATH,
                    YarnConfiguration.DEFAULT_AUTO_FAILOVER_ZK_BASE_PATH);
                latchPath = zkBasePath + "/" + clusterId;
            
          3. isLeader(): update log messages to say - "rmId is elected leader, transitioning to active". On failure, "Failed to transition to active, giving up leadership"?
          4. reJoinElection: why sleep for 1 second?
          5. reJoinElection: Also, on exception, it might not be enough to just log it. If it is due to close(), don't we want to force give-up so the other RM becomes active? If it is on initAndStartLeaderLatch(), this RM will never become active; don't we want to just die?
          6. How about adding a method called closeLeaderLatch to complement initAndStartLeaderLatch? That would help us avoid cases like the missing null check in reJoinElection. (See the sketch after this list.)
          7. notLeader: Again, we should likely do more than just logging.
        2. YarnConfiguration: If our long-term plan is to keep the Curator version and get rid of EmbeddedElectorService, maybe we should have a config to use the embedded elector instead of the Curator elector, e.g. yarn.resourcemanager.ha.use-active-standby-elector, and set it to true by default for now?
        3. ResourceManager:
          1. Nit: Spurious import changes - prefer leaving them out or fixing them in a separate patch before/after.
          2. Why change the argument to transitionToStandby from true to false?
          3. Comment: Looking closer, in the following method, reinitialize(initialize) should be called outside the if. No? I am surprised we haven't noticed this before. Maybe fix it on another JIRA?
        4. Looking at the changes in AdminService and ResourceManager, I still feel the AdminService should be the one handling the LeaderElectorService. Also, the LeaderElectorService talks to AdminService for transitions to active/standby.
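
        A sketch combining items 1.4-1.6 above; the method and field names follow the ones mentioned in this thread, and the error handling is illustrative:

        // Pair initAndStartLeaderLatch() with a closing helper so
        // reJoinElection() cannot trip over a null latch.
        private synchronized void closeLeaderLatch() throws IOException {
          if (leaderLatch != null) {
            leaderLatch.close();
            leaderLatch = null;
          }
        }

        private void reJoinElection() {
          try {
            closeLeaderLatch();
            Thread.sleep(1000); // brief back-off before re-joining (item 1.4)
            initAndStartLeaderLatch();
          } catch (Exception e) {
            // Per item 1.5, just logging may not be enough; see the discussion below.
            LOG.error("Failed to re-join the leader election", e);
          }
        }
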
        Jian He added a comment -

        Thanks for the detailed review!

        All that is needed is to store the CuratorFramework instance in RMContext.

        Actually, I need to refactor the zkClient creation logic out of ZKRMStateStore, as the zkClient requires a bunch of other configs. And because ZKRMStateStore is currently an active service, it cannot simply be moved to an AlwaysOn service. So, I'd like to do that separately to minimize the core change in this JIRA.

        The instance, rm, is not used anywhere. Why even pass it?

        I was earlier directly calling rm.transitionToActive instead of calling AdminService#transitionToActive. But just to minimize the change and keep it consistent with EmbeddedElectorService, I changed it to call AdminService#transitionToActive.
        The only extra thing AdminService does is to refresh the ACLs. Suppose the shared-storage-based configuration provider is not enabled (which is the most usual case); why do we need to refresh the configs? It cannot read the remote RM's config anyway. Without these refresh calls, we can avoid bugs like YARN-3893. Also, the RM itself does not need to depend on the admin ACLs to transition to active/standby; it should always have permission to do that. I'd like to change this part so the RM does not refresh the configs if the shared-storage-based config provider is not enabled. (A sketch of that check follows.)
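
        A sketch of that check; expressing the condition via the provider class name is an assumption for illustration:

        // Only refresh configs when they can actually come from shared storage.
        String provider = conf.get(
            YarnConfiguration.RM_CONFIGURATION_PROVIDER_CLASS,
            YarnConfiguration.DEFAULT_RM_CONFIGURATION_PROVIDER_CLASS);
        boolean sharedConfigEnabled =
            provider.contains("FileSystemBasedConfigurationProvider");
        if (sharedConfigEnabled) {
          refreshAll(); // a standby can only pick up remote changes in this case
        }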

        why sleep for 1 second

        To avoid a busy loop of rejoining immediately. That's what ActiveStandbyElector does too. It could be more than 1s; I don't think we need one more config for this.

        If it is due to close(), don't we want to force give-up so the other RM becomes active? If it is on initAndStartLeaderLatch(), this RM will never become active; don't we want to just die?

        What do you mean by force give-up? Exit the RM?
        The underlying Curator implementation will retry the connection in the background, even though the exception is thrown. See the Guaranteeable interface in Curator. I think exiting the RM is too harsh here. Even though the RM remains at standby, all services should already be shut down, so there's no harm to the end users?

        I have one question about the ActiveStandbyCheckThread: if we make the zkStateStore and the elector share the same zkClient, do we still need the ActiveStandbyCheckThread? The elector itself should get a notification when the connection is lost.

        notLeader: Again, we should likely do more than just logging.

        This is currently what EmbeddedElectorService does. If the leadership is already lost from ZK's perspective, the other RM should take up the leadership.

        How about adding a method called closeLeaderLatch to complement initAndStartLeaderLatch? That would help us avoid cases like the missing null check in reJoinElection?

        I think leaderLatch can never be null?

        maybe we should have a config to use the embedded elector instead of the Curator elector, e.g. yarn.resourcemanager.ha.use-active-standby-elector

        This flag is just a temporary thing; a lot of test cases would need to be changed without it. I plan to remove this flag, and the embedded elector code too, in a follow-up.

        Why change the argument to transitionToStandby from true to false? In the following method, reinitialize(initialize) should be called outside the if. No?

        Why does it need to be called outside of if (state == HAServiceProtocol.HAServiceState.ACTIVE)? This is a fresh start; it does not need to call reinitialize.

        still feel the AdminService should be the one handling the LeaderElectorService. Also, the LeaderElectorService talks to AdminService for transitions to active/standby.

        Currently, AdminService does not depend on the embedded leader elector at all; all it does is initialize EmbeddedElectorService. Maybe the elector does not need to depend on AdminService either, i.e. it need not refresh the ACLs if the shared-storage-based config provider is not enabled.

        Will update other comments accordingly.

        Karthik Kambatla added a comment -

        And because ZKRMStateStore is currently an active service, it cannot simply be moved to an AlwaysOn service. So, I'd like to do that separately to minimize the core change in this JIRA.

        Fine with a separate JIRA. Not sure I understand why ZKRMStateStore needs to be an AlwaysOn service.

        I'd like to change this part for RM to not refresh the configs if shared storage based config provider is not enabled.

        I was never a fan of the shared-storage-configuration stuff. Now that we have it, don't think we can get rid of it until Hadoop 4. How would this change look? The RM has an instance of the elector; every time we transition to active, will either the RM or the elector check if shared-storage-config-provider is enabled and call refresh?

        But yeah, I do see the point of calling these methods directly from RM.

        To avoid a busy loop and rejoining immediately.

        If we rejoin immediately, one of the RMs would become Active. It is not like the RM is going to use the cycles for anything else if we sleep. Is the concern that Curator may be biased in picking an RM in certain conditions?

        What do you mean by force give-up ? exit RM ?

        If leaderLatch.close() throws an exception, when does Curator realize the RM is not participating in the election anymore? If not, it might keep electing the same RM active? How do we handle this, and how long of a wait is okay?

        Even though RM remains at standby, all services should be already shutdown, so there's no harm to the end users ?

        Agree, there is no harm. My concern is about availability - having one of the RMs active "most" of the time.

        I have one question about ActiveStandbyCheckThread. if we make zkStateStore and elector to share the same zkClient, do we still need the ActiveStandbyCheckThread ? the elector itself should get notification when the connection is lost.

        Are you referring to the VerifyActiveStatusThread? The connection can be restored even after the RM loses leadership. We could actively stop the store if it hasn't already stopped; the store would already have been fenced, so we don't run the risk of corrupting it. So, you are right, we might not need that thread.

        This is currently what EmbeddedElectorService is doing. If the leadership is already lost from zk's perspective, the other RM should take up the leadership

        You are right, it isn't a big deal. Just realized EmbeddedElectorService does the same today. Haven't seen Curator's LeaderLatch code. What happens if this RM is subsequently elected leader? Does the transition to Active succeed just fine? Or, is it possible it gets stuck in a way it can't transition to active? If it gets into such a situation, we should consider crashing it altogether.

        I think leaderLatch could never be null ?

        Seeing all the NPEs we have in RM/Scheduler, I would like for us to err on the side of caution and do null-checks. If not, we at least need to make it consistent everywhere.

        Why does it need to be called outside of if (state == HAServiceProtocol.HAServiceState.ACTIVE)? This is a fresh start; it does not need to call reinitialize.

        You are right. Sorry for the noise, clearly it has been a while since I looked at this code.

        jianhe Jian He added a comment -

        Not sure I understand why ZKRMStateStore needs to be an AlwaysOn service.

        It does not need to be always on; just the zkClient in ZKRMStateStore needs to be always on.
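
        For illustration, the sharing being described could look roughly like this (a sketch only; setCurator() is an assumed accessor, not necessarily the committed API):

        {code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryNTimes;

// Build one always-on CuratorFramework and hand it to both the elector
// and ZKRMStateStore through RMContext, instead of each creating its own.
CuratorFramework curator = CuratorFrameworkFactory.newClient(
    zkHostPort,                                    // e.g. "zk1:2181,zk2:2181"
    new RetryNTimes(numRetries, retryIntervalMs)); // illustrative retry policy
curator.start();
rmContext.setCurator(curator);                     // assumed accessor
        {code}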

        How would this change look?

        At first glance: in AdminService#transitionToStandby and transitionToActive, don't call refreshAll if the shared-storage config provider is not enabled.
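
        A minimal sketch of that gating (isSharedStorageConfigProvider() is a hypothetical helper, not existing API):

        {code:java}
// Only refresh when configs live in shared storage and can actually
// have changed underneath this RM; otherwise the refresh is a no-op
// at best and a failure source at worst (see YARN-3893).
private synchronized void transitionToActive() throws Exception {
  rm.transitionToActive();
  if (isSharedStorageConfigProvider(conf)) {
    refreshAll();
  }
}
        {code}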

        Is the concern that Curator may be biased in picking an RM in certain conditions?

        Yeah, that's just my guess. Rejoining immediately may give this RM a better chance of taking leadership again. ActiveStandbyElector#reJoinElectionAfterFailureToBecomeActive has similar comments.
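
        Roughly, the rejoin path under discussion (a sketch; closeLeaderLatch() and initAndStartLeaderLatch() are assumed helper names):

        {code:java}
// Back off briefly instead of re-entering the election in a tight loop,
// which may reduce the chance this RM immediately retakes leadership.
private void rejoinElection() {
  try {
    closeLeaderLatch();          // give up the current candidacy
    Thread.sleep(1000);          // brief pause before re-entering
    initAndStartLeaderLatch();   // rejoin the election
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
  } catch (Exception e) {
    LOG.warn("Failed to rejoin election", e);
  }
}
        {code}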

        If leaderLatch.close() throws an exception, when does Curator realize the RM is not participating in the election anymore?

        Based on my understanding, Curator will realize it when it does not hear from the RM for the zkSessionTimeout period. Essentially, the zkClient on the RM side will keep retrying to notify the zk quorum that this client is closed. If close succeeds, the zk quorum gets notified immediately and re-elects a leader. If close keeps retrying beyond zkSessionTimeout, the zk quorum will assume this client died and re-elects a leader.
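
        For reference, this is Curator's guaranteed-operation behavior (canonical usage; the connect string, retry policy, and path are illustrative):

        {code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

CuratorFramework client = CuratorFrameworkFactory.newClient(
    "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
client.start();
// Even if this call throws, Curator keeps retrying the delete in the
// background (Guaranteeable). If the retries outlast the ZK session
// timeout, the ephemeral node expires anyway and a new leader is elected.
client.delete().guaranteed().forPath("/yarn-leader-election/latch");
        {code}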

        we might not need that thread.

        Then we can remove this thread? I'll do it separately if you agree.

        What happens if this RM is subsequently elected leader? Does the transition to Active succeed just fine?

        I think it can transition to active the next time it's elected leader. The previous failure will most likely have happened on refreshAcl.
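
        The callback wiring being described, roughly (a sketch using Curator's LeaderLatchListener; adminService, req, and rejoinElection() are assumed names):

        {code:java}
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;

// A failed transition leaves this RM in standby; it simply tries again
// the next time the latch elects it leader.
leaderLatch.addListener(new LeaderLatchListener() {
  @Override
  public void isLeader() {
    try {
      adminService.transitionToActive(req);
    } catch (Exception e) {
      LOG.warn("Failed to transition to active; rejoining election", e);
      rejoinElection();
    }
  }

  @Override
  public void notLeader() {
    try {
      adminService.transitionToStandby(req);
    } catch (Exception e) {
      LOG.warn("Failed to transition to standby", e);
    }
  }
});
        {code}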

        I would like for us to err on the side of caution and do null-checks.

        will do
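
        A sketch of the null-safe close helper suggested above (names assumed):

        {code:java}
import java.io.IOException;

// Complement to initAndStartLeaderLatch(); safe to call even if the
// latch was never started or has already been torn down.
private synchronized void closeLeaderLatch() throws IOException {
  if (leaderLatch != null) {
    leaderLatch.close();   // LeaderLatch implements Closeable
    leaderLatch = null;
  }
}
        {code}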

        jianhe Jian He added a comment -

        Uploaded a new patch that addresses the review comments.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
        +1 mvninstall 7m 36s trunk passed
        +1 compile 1m 45s trunk passed with JDK v1.8.0_66
        +1 compile 2m 6s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 32s trunk passed
        +1 mvnsite 1m 6s trunk passed
        +1 mvneclipse 0m 27s trunk passed
        +1 findbugs 2m 34s trunk passed
        +1 javadoc 1m 0s trunk passed with JDK v1.8.0_66
        +1 javadoc 3m 15s trunk passed with JDK v1.7.0_91
        -1 mvninstall 0m 27s hadoop-yarn-server-resourcemanager in the patch failed.
        -1 compile 1m 22s hadoop-yarn in the patch failed with JDK v1.8.0_66.
        -1 javac 1m 22s hadoop-yarn in the patch failed with JDK v1.8.0_66.
        -1 compile 1m 33s hadoop-yarn in the patch failed with JDK v1.7.0_91.
        -1 javac 1m 33s hadoop-yarn in the patch failed with JDK v1.7.0_91.
        -1 checkstyle 0m 31s Patch generated 7 new checkstyle issues in hadoop-yarn-project/hadoop-yarn (total was 316, now 322).
        -1 mvnsite 0m 35s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 mvneclipse 0m 26s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        -1 findbugs 0m 32s hadoop-yarn-server-resourcemanager in the patch failed.
        -1 javadoc 0m 24s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        +1 javadoc 3m 21s the patch passed with JDK v1.7.0_91
        +1 unit 0m 22s hadoop-yarn-api in the patch passed with JDK v1.8.0_66.
        -1 unit 0m 30s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        +1 unit 0m 25s hadoop-yarn-api in the patch passed with JDK v1.7.0_91.
        -1 unit 0m 29s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
        +1 asflicense 0m 19s Patch does not generate ASF License warnings.
        36m 17s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12779714/YARN-4438.4.patch
        JIRA Issue YARN-4438
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 73fcb0662fe6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / d0a22ba
        Default Java 1.7.0_91
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
        findbugs v3.0.0
        mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        compile https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn-jdk1.8.0_66.txt
        javac https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn-jdk1.8.0_66.txt
        compile https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn-jdk1.7.0_91.txt
        javac https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn-jdk1.7.0_91.txt
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        findbugs https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10108/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10108/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Max memory used 76MB
        Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/10108/console

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
        +1 mvninstall 8m 3s trunk passed
        +1 compile 2m 8s trunk passed with JDK v1.8.0_66
        +1 compile 2m 18s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 33s trunk passed
        +1 mvnsite 1m 10s trunk passed
        +1 mvneclipse 0m 27s trunk passed
        +1 findbugs 2m 51s trunk passed
        +1 javadoc 1m 16s trunk passed with JDK v1.8.0_66
        +1 javadoc 3m 56s trunk passed with JDK v1.7.0_91
        +1 mvninstall 1m 4s the patch passed
        +1 compile 2m 20s the patch passed with JDK v1.8.0_66
        +1 javac 2m 20s the patch passed
        +1 compile 2m 35s the patch passed with JDK v1.7.0_91
        +1 javac 2m 35s the patch passed
        -1 checkstyle 0m 39s Patch generated 7 new checkstyle issues in hadoop-yarn-project/hadoop-yarn (total was 315, now 320).
        +1 mvnsite 1m 13s the patch passed
        +1 mvneclipse 0m 27s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 3m 15s the patch passed
        -1 javadoc 0m 26s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        +1 javadoc 3m 29s the patch passed with JDK v1.7.0_91
        +1 unit 0m 27s hadoop-yarn-api in the patch passed with JDK v1.8.0_66.
        -1 unit 63m 0s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        +1 unit 0m 25s hadoop-yarn-api in the patch passed with JDK v1.7.0_91.
        -1 unit 61m 58s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
        +1 asflicense 0m 20s Patch does not generate ASF License warnings.
        166m 36s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
        JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.TestClientRMTokens



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12779886/YARN-4438.5.patch
        JIRA Issue YARN-4438
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux d52f2296af9c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 5273413
        Default Java 1.7.0_91
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10117/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/10117/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10117/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10117/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10117/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/10117/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10117/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Max memory used 76MB
        Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/10117/console

        This message was automatically generated.

        kasha Karthik Kambatla added a comment -

        +1, pending this nit: LeaderElectorService has a field "rm" that is never used.

        Feel free to commit it post this change and a clean Jenkins run.

        jianhe Jian He added a comment -

        Thanks for reviewing the patch!
        Attached a new patch.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
        +1 mvninstall 7m 30s trunk passed
        +1 compile 1m 42s trunk passed with JDK v1.8.0_66
        +1 compile 2m 3s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 31s trunk passed
        +1 mvnsite 1m 4s trunk passed
        +1 mvneclipse 0m 27s trunk passed
        +1 findbugs 2m 28s trunk passed
        +1 javadoc 0m 59s trunk passed with JDK v1.8.0_66
        +1 javadoc 3m 11s trunk passed with JDK v1.7.0_91
        +1 mvninstall 0m 54s the patch passed
        +1 compile 2m 13s the patch passed with JDK v1.8.0_66
        +1 javac 2m 13s the patch passed
        +1 compile 2m 9s the patch passed with JDK v1.7.0_91
        +1 javac 2m 9s the patch passed
        -1 checkstyle 0m 31s Patch generated 4 new checkstyle issues in hadoop-yarn-project/hadoop-yarn (total was 315, now 318).
        +1 mvnsite 1m 0s the patch passed
        +1 mvneclipse 0m 22s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 2m 48s the patch passed
        -1 javadoc 0m 25s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
        +1 javadoc 3m 9s the patch passed with JDK v1.7.0_91
        +1 unit 0m 22s hadoop-yarn-api in the patch passed with JDK v1.8.0_66.
        +1 unit 60m 6s hadoop-yarn-server-resourcemanager in the patch passed with JDK v1.8.0_66.
        +1 unit 0m 23s hadoop-yarn-api in the patch passed with JDK v1.7.0_91.
        -1 unit 61m 21s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
        +1 asflicense 0m 19s Patch does not generate ASF License warnings.
        157m 54s



        Reason Tests
        JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12781031/YARN-4438.6.patch
        JIRA Issue YARN-4438
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 8794fd7709f9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 52b7757
        Default Java 1.7.0_91
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/10194/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/10194/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/10194/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/10194/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/10194/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Max memory used 76MB
        Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/10194/console

        This message was automatically generated.

        xgong Xuan Gong added a comment -

        +1 lgtm. Checking this in.

        xgong Xuan Gong added a comment -

        Committed into trunk/branch-2. Thanks, Jian. And thanks for the review, Karthik.

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-trunk-Commit #9071 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9071/)
        YARN-4438. Implement RM leader election with curator. Contributed by (xgong: rev 89022f8d4bac0e9d0b848fd91e9c4d700fe1cdbe)

        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/LeaderElectorService.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
        • hadoop-yarn-project/CHANGES.txt
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestLeaderElectorService.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
        ajisakaa Akira Ajisaka added a comment -

        Hi Xuan Gong and Jian He,

        -1 javadoc

        Please do not ignore the -1. mvn package -Pdist -DskipTests fails after this issue. Would you fix it?

        jianhe Jian He added a comment -

        Akira Ajisaka, I think it's complaining that the newly added class requires a javadoc. For a user-facing API, definitely; but for an internal-use class, I don't think that's a must-have based on our current code base. Anyway, I can address that in YARN-4559.

        stevel@apache.org Steve Loughran added a comment -

        It's complaining that the javadocs are invalid: Java 8 javadoc is a lot less forgiving.

        This needs to be reverted or fixed, as it is breaking Jenkins; it was clearly checked in without looking at the "hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66." bit of the Yetus report.

        Please fix this with a one-line emergency patch or revert the change. Otherwise I'll do the rollback. Sorry.

        jianhe Jian He added a comment -

        Akira Ajisaka, actually, what I said was about checkstyle. As for the javadoc -1, I couldn't find which part of it is related to this jira.
        https://builds.apache.org/job/PreCommit-YARN-Build/10194/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt

        And it's passing on my local machine. Is it failing on your end?

        stevel@apache.org Steve Loughran added a comment -

        Ignore everything I'm saying... I'm confused now, and if there is something up, I don't think it's this patch. Let me do a full Java 8 build locally.

        stevel@apache.org Steve Loughran added a comment -

        Everyone: yes, the build fails in javadocs. But I don't think it's related to this. Let me see what it takes to fix.

        stevel@apache.org Steve Loughran added a comment -

        OK. I've got a one-line patch at YARN-4567 covering the javadocs.

        I don't believe the javadocs have anything to do with this patch, so ignore our complaints and please accept my apology for even complaining. Something may have changed, but it wasn't this patch. Jenkins itself?

        jianhe Jian He added a comment -

        No problem, Steve. Thanks for spending time investigating this!

        ajisakaa Akira Ajisaka added a comment -

        Thanks Steve!

        djp Junping Du added a comment -

        Per discussion in YARN-5709, I have merged the patch (it applies cleanly except for CHANGES.txt, which was removed later) to branch-2.8.


  People

  • Assignee: Jian He
  • Reporter: Jian He
  • Votes: 1
  • Watchers: 18
