Hadoop YARN / YARN-6959

RM may allocate wrong AM Container for new attempt

    Details

    • Target Version/s:
    • Hadoop Flags:
      Reviewed
    • Release Note:
      ResourceManager will now record ResourceRequests from different attempts into different objects.
    • Flags:
      Patch, Important

      Description

      Issue Summary:
A ResourceRequest from a previous attempt may be recorded into the current attempt's ResourceRequests. These mis-recorded ResourceRequests can confuse the AM Container request and allocation for the current attempt.

      Issue Pipeline:

      // Precondition check runs against the incoming attempt id.
      ApplicationMasterService.allocate() ->
      
      scheduler.allocate(attemptId, ask, ...) ->
      
      // The earlier precondition check may be stale by this point:
      // currentAttempt may no longer be the attempt that attemptId refers to,
      // e.g. attemptId may refer to the previous attempt.
      currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
      
      // A previous attempt's ResourceRequests may be recorded into the current
      // attempt's ResourceRequests.
      currentAttempt.updateResourceRequests(ask) ->
      
      // RM may allocate the wrong AM Container for the current attempt, because its
      // ResourceRequests may come from the previous attempt (i.e. any ResourceRequest
      // the previous AM asked for), and there is no logic matching the original AM
      // Container ResourceRequest against the amContainerAllocation returned below.
      AMContainerAllocatedTransition.transition(...) ->
      amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
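The race above can be reproduced in miniature. The sketch below is a hypothetical simplification (class, field, and method names are invented, not Hadoop APIs): a lookup that ignores the requested attempt id lets a late allocate() call from attempt 1 record its ask under attempt 2.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical miniature of the race (names invented, not Hadoop APIs):
// getApplicationAttempt(attemptId) ignores the requested attempt number and
// returns whatever attempt is current, so a late allocate() from attempt 1
// records its ask under attempt 2.
public class AttemptRaceSketch {
    // Per-attempt resource asks in MB, keyed by attempt id.
    static final Map<String, List<Integer>> asksByAttempt = new HashMap<>();
    static String currentAttemptId = "attempt_1";

    // Mirrors the problematic lookup: the requested id is silently ignored.
    static String getApplicationAttempt(String requestedAttemptId) {
        return currentAttemptId;
    }

    static void allocate(String attemptId, int askMB) {
        String attempt = getApplicationAttempt(attemptId);
        asksByAttempt.computeIfAbsent(attempt, k -> new ArrayList<>()).add(askMB);
    }

    public static void main(String[] args) {
        // Attempt 1 fails; attempt 2 becomes current before attempt 1's
        // in-flight 5 GB ask is processed.
        currentAttemptId = "attempt_2";
        allocate("attempt_1", 5120);
        // The stale 5 GB ask now sits in attempt 2's request list.
        System.out.println(asksByAttempt.get("attempt_2")); // prints [5120]
    }
}
```

This matches the observed symptom: the second AM container was launched with the previous attempt's 5 GB task-container request instead of the 20 GB AM request.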
      

      Patch Correctness:
      After this patch, RM records ResourceRequests from different attempts into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
      So even if RM still records ResourceRequests from an old attempt at any time, they land in the old attempt's AppSchedulingInfo object and cannot affect the current attempt's resource requests and allocation.
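The fix can be sketched with the same kind of toy model (names invented, not the actual patch code): keying the scheduling info by the requested attempt id isolates stale asks in the old attempt's object.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the fix (names invented): each attempt owns a separate
// scheduling-info object, so a stale ask recorded under an old attempt id
// stays in the old object and never reaches the current attempt.
public class PerAttemptInfoSketch {
    static final Map<String, List<Integer>> infoByAttempt = new HashMap<>();

    // One scheduling-info object per attempt id, created on first use.
    static List<Integer> getSchedulingInfo(String attemptId) {
        return infoByAttempt.computeIfAbsent(attemptId, k -> new ArrayList<>());
    }

    // Record the ask against the *requested* attempt, not the current one.
    static void allocate(String attemptId, int askMB) {
        getSchedulingInfo(attemptId).add(askMB);
    }

    public static void main(String[] args) {
        allocate("attempt_1", 5120);   // stale ask from the failed attempt
        allocate("attempt_2", 20480);  // current attempt's 20 GB AM ask
        System.out.println(getSchedulingInfo("attempt_2")); // prints [20480]
    }
}
```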

      Concerns:
      The getApplicationAttempt method in AbstractYarnScheduler is misleading: despite taking an attempt id, it returns the current attempt. We should rename it to getCurrentApplicationAttempt, and audit its other callers for similar bugs.
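Beyond the rename, one defensive option is to validate the requested id instead of silently substituting the current attempt. A hypothetical guard (invented names, not the actual Hadoop method):

```java
// Hypothetical guarded lookup: reject stale attempt ids instead of silently
// returning the current attempt (invented names, not the Hadoop API).
public class GuardedLookupSketch {
    static String currentAttemptId = "attempt_2";

    static String getCurrentApplicationAttempt(String requestedAttemptId) {
        if (!currentAttemptId.equals(requestedAttemptId)) {
            // A stale caller gets an explicit error rather than the wrong attempt.
            throw new IllegalStateException("Requested " + requestedAttemptId
                + " but current attempt is " + currentAttemptId);
        }
        return currentAttemptId;
    }

    public static void main(String[] args) {
        try {
            getCurrentApplicationAttempt("attempt_1"); // stale id from attempt 1
        } catch (IllegalStateException e) {
            System.out.println("rejected stale attempt"); // prints this line
        }
    }
}
```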

      1. YARN-6959-branch-2.8.002.patch
        11 kB
        Yuqi Wang
      2. YARN-6959-branch-2.8.001.patch
        11 kB
        Yuqi Wang
      3. YARN-6959-branch-2.7.006.patch
        11 kB
        Yuqi Wang
      4. YARN-6959-branch-2.7.005.patch
        11 kB
        Yuqi Wang
      5. YARN-6959.yarn_rm.log.zip
        4.04 MB
        Yuqi Wang
      6. YARN-6959.yarn_nm.log.zip
        1.74 MB
        Yuqi Wang
      7. YARN-6959.005.patch
        11 kB
        Yuqi Wang

        Activity

        yqwang Yuqi Wang added a comment - - edited

        Here is the log for the issue:

        application_1500967702061_2512 asked for 20 GB for its AM Container and 5 GB for its Task Containers:

        2017-07-31 20:58:49,532 INFO [Container Monitor] org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree container_e71_1500967702061_2512_01_000001 for container-id container_e71_1500967702061_2512_01_000001: 307.8 MB of 20 GB physical memory used; 1.2 GB of 30 GB virtual memory used
        

        After its first attempt failed, the second attempt was submitted; however, the NM mistakenly believed the AM Container was only 5 GB:

        2017-07-31 21:29:46,219 INFO [Container Monitor] org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree container_e71_1500967702061_2512_02_000001 for container-id container_e71_1500967702061_2512_02_000001: 352.5 MB of 5 GB physical memory used; 1.4 GB of 7.5 GB virtual memory used
        
        

        Here is the RM log for the second attempt, which also has the InvalidStateTransitonException: Invalid event: CONTAINER_ALLOCATED at ALLOCATED_SAVING:

        2017-07-31 21:29:38,510 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1500967702061_2512 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@57fbb4f5, leaf-queue: prod-new #user-pending-applications: 0 #user-active-applications: 6 #queue-pending-applications: 0 #queue-active-applications: 6
        2017-07-31 21:29:38,510 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1500967702061_2512_000002 to scheduler from user hadoop in queue prod-new
        2017-07-31 21:29:38,514 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1500967702061_2512_000002 State change from SUBMITTED to SCHEDULED
        
        2017-07-31 21:29:38,517 INFO [Thread-13] org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e71_1500967702061_2512_02_000001 Container Transitioned from NEW to ALLOCATED
        2017-07-31 21:29:38,517 INFO [Thread-13] org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hadoop	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1500967702061_2512	CONTAINERID=container_e71_1500967702061_2512_02_000001
        2017-07-31 21:29:38,517 INFO [Thread-13] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1500967702061_2512_000002 container=Container: [ContainerId: container_e71_1500967702061_2512_02_000001, NodeId: BN2APS0A98AEA0:10025, NodeHttpAddress: Proxy5.Yarn-Prod-Bn2.BN2.ap.gbl:81/proxy/nodemanager/BN2APS0A98AEA0/8042, Resource: <memory:5120, vCores:1, ports:null>, Priority: 1, Token: null, ] queue=prod-new: capacity=0.7, absoluteCapacity=0.7, usedResources=<memory:0, vCores:0, ports:null>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=6, numContainers=8016 clusterResource=<memory:261614761, vCores:79088, ports:null> type=OFF_SWITCH
        2017-07-31 21:29:38,517 INFO [Thread-13] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0, ports:null> cluster=<memory:261614761, vCores:79088, ports:null>
        2017-07-31 21:29:38,517 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : BN2APS0A98AEA0:10025 for container : container_e71_1500967702061_2512_02_000001
        2017-07-31 21:29:38,517 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e71_1500967702061_2512_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
        2017-07-31 21:29:38,517 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1500967702061_2512_000002
        2017-07-31 21:29:38,517  LOP-998291496]-[download]-[0@1]-[application_1501027078051_3009],prod-new,null,null,-1," for attrs weka.core.FastVector@789038c6
        2017-07-31 21:29:38,517 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1500967702061_2512 AttemptId: appattempt_1500967702061_2512_000002 MasterContainer: Container: [ContainerId: container_e71_1500967702061_2512_02_000001, NodeId: BN2APS0A98AEA0:10025, NodeHttpAddress: Proxy5.Yarn-Prod-Bn2.BN2.ap.gbl:81/proxy/nodemanager/BN2APS0A98AEA0/8042, Resource: <memory:5120, vCores:1, ports:null>, Priority: 1, Token: Token { kind: ContainerToken, service: 10.152.174.160:10025 }, ]
        2017-07-31 21:29:38,518 INFO [Thread-13] org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e71_1500967702061_2512_02_000002 Container Transitioned from NEW to ALLOCATED
        2017-07-31 21:29:38,518 INFO [Thread-13] org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hadoop	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1500967702061_2512	CONTAINERID=container_e71_1500967702061_2512_02_000002
        2017-07-31 21:29:38,518 INFO [Thread-13] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1500967702061_2512_000002 container=Container: [ContainerId: container_e71_1500967702061_2512_02_000002, NodeId: BN1APS0A1BAECA:10025, NodeHttpAddress: Proxy5.Yarn-Prod-Bn2.BN2.ap.gbl:81/proxy/nodemanager/BN1APS0A1BAECA/8042, Resource: <memory:5120, vCores:1, ports:null>, Priority: 1, Token: null, ] queue=prod-new: capacity=0.7, absoluteCapacity=0.7, usedResources=<memory:0, vCores:0, ports:null>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=6, numContainers=8017 clusterResource=<memory:261614761, vCores:79088, ports:null> type=OFF_SWITCH
        2017-07-31 21:29:38,518 INFO [Thread-13] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0, ports:null> cluster=<memory:261614761, vCores:79088, ports:null>
        2017-07-31 21:29:38,518 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Container container_e71_1500967702061_2512_01_001344 completed with event FINISHED
        2017-07-31 21:29:38,518 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1500967702061_2512_000002 State change from SCHEDULED to ALLOCATED_SAVING
        2017-07-31 21:29:38,518  LOP-998291496]-[download]-[0@1]-[application_1501027078051_3009],prod-new,null,null,-1," for attrs weka.core.FastVector@789038c6
        2017-07-31 21:29:38,518 INFO [Thread-13] org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e71_1500967702061_2512_02_000003 Container Transitioned from NEW to ALLOCATED
        2017-07-31 21:29:38,518 INFO [Thread-13] org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hadoop	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1500967702061_2512	CONTAINERID=container_e71_1500967702061_2512_02_000003
        2017-07-31 21:29:38,518 ERROR [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Can't handle this event at current state
        org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: CONTAINER_ALLOCATED at ALLOCATED_SAVING
        	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
        	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
        	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
        	at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:808)
        	at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:106)
        	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:947)
        	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:928)
        	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
        	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
        at java.lang.Thread.run(Thread.java:745)
        
        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 18m 43s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
              trunk Compile Tests
        -1 mvninstall 0m 12s root in trunk failed.
        +1 compile 0m 47s trunk passed with JDK v1.8.0_144
        -1 compile 0m 11s hadoop-yarn-server-resourcemanager in trunk failed with JDK v1.7.0_131.
        +1 checkstyle 0m 11s trunk passed
        -1 mvnsite 0m 13s hadoop-yarn-server-resourcemanager in trunk failed.
        -1 findbugs 0m 11s hadoop-yarn-server-resourcemanager in trunk failed.
        +1 javadoc 0m 25s trunk passed with JDK v1.8.0_144
        -1 javadoc 0m 11s hadoop-yarn-server-resourcemanager in trunk failed with JDK v1.7.0_131.
              Patch Compile Tests
        -1 mvninstall 0m 10s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 compile 0m 42s the patch passed with JDK v1.8.0_144
        +1 javac 0m 42s the patch passed
        -1 compile 0m 10s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131.
        -1 javac 0m 10s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131.
        +1 checkstyle 0m 10s the patch passed
        -1 mvnsite 0m 12s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 whitespace 0m 0s The patch has no whitespace issues.
        -1 findbugs 0m 9s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 javadoc 0m 23s the patch passed with JDK v1.8.0_144
        -1 javadoc 0m 11s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131.
              Other Tests
        -1 unit 0m 11s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131.
        +1 asflicense 0m 23s The patch does not generate ASF License warnings.
        74m 35s



        Reason Tests
        JDK v1.8.0_144 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
        JDK v1.8.0_144 Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands
          org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore
          org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
          org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA
          org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:67e87c9
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12880611/YARN-6959.001.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 30c57d319610 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 46b7054
        Default Java 1.7.0_131
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_144 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131
        mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/branch-mvninstall-root.txt
        compile https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/branch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        findbugs https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        compile https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        javac https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        findbugs https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16733/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        JDK v1.7.0_131 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16733/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16733/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Hide
        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 23s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
              trunk Compile Tests
        +1 mvninstall 16m 31s trunk passed
        +1 compile 0m 44s trunk passed
        +1 checkstyle 0m 32s trunk passed
        +1 mvnsite 0m 44s trunk passed
        +1 findbugs 1m 21s trunk passed
        +1 javadoc 0m 26s trunk passed
              Patch Compile Tests
        +1 mvninstall 0m 42s the patch passed
        +1 compile 0m 41s the patch passed
        +1 javac 0m 41s the patch passed
        -0 checkstyle 0m 32s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 203 unchanged - 0 fixed = 205 total (was 203)
        +1 mvnsite 0m 43s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 22s the patch passed
        +1 javadoc 0m 25s the patch passed
              Other Tests
        -1 unit 42m 39s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 16s The patch does not generate ASF License warnings.
        69m 27s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:14b5c93
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12880611/YARN-6959.001.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux fbb953134c75 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 46b7054
        Default Java 1.8.0_131
        findbugs v3.1.0-RC1
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16734/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16734/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16734/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16734/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 17s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              trunk Compile Tests
        +1 mvninstall 13m 58s trunk passed
        +1 compile 0m 35s trunk passed
        +1 checkstyle 0m 31s trunk passed
        +1 mvnsite 0m 36s trunk passed
        +1 findbugs 1m 0s trunk passed
        +1 javadoc 0m 21s trunk passed
              Patch Compile Tests
        +1 mvninstall 0m 33s the patch passed
        +1 compile 0m 31s the patch passed
        +1 javac 0m 31s the patch passed
        -0 checkstyle 0m 28s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 396 unchanged - 1 fixed = 397 total (was 397)
        +1 mvnsite 0m 35s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 8s the patch passed
        +1 javadoc 0m 20s the patch passed
              Other Tests
        -1 unit 57m 27s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 14s The patch does not generate ASF License warnings.
        79m 51s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
        Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.TestLeaderElectorService



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:14b5c93
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12880645/YARN-6959.002.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux f0163abaeeff 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 0b67436
        Default Java 1.8.0_131
        findbugs v3.1.0-RC1
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16740/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16740/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16740/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16740/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        yqwang Yuqi Wang added a comment -

        Re-triggered QA using the same patch as 002.

        yqwang Yuqi Wang added a comment -

        Adjusted style.

        hadoopqa Hadoop QA added a comment -
        +1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 17s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              trunk Compile Tests
        +1 mvninstall 15m 34s trunk passed
        +1 compile 0m 40s trunk passed
        +1 checkstyle 0m 30s trunk passed
        +1 mvnsite 0m 38s trunk passed
        +1 findbugs 1m 12s trunk passed
        +1 javadoc 0m 24s trunk passed
              Patch Compile Tests
        +1 mvninstall 0m 37s the patch passed
        +1 compile 0m 34s the patch passed
        +1 javac 0m 34s the patch passed
        -0 checkstyle 0m 28s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 393 unchanged - 1 fixed = 394 total (was 394)
        +1 mvnsite 0m 43s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 19s the patch passed
        +1 javadoc 0m 21s the patch passed
              Other Tests
        +1 unit 44m 28s hadoop-yarn-server-resourcemanager in the patch passed.
        +1 asflicense 0m 17s The patch does not generate ASF License warnings.
        69m 19s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:14b5c93
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12880771/YARN-6959.003.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 493a53b5cebf 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 8d3fd81
        Default Java 1.8.0_131
        findbugs v3.1.0-RC1
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16765/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16765/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16765/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 15s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              trunk Compile Tests
        +1 mvninstall 14m 7s trunk passed
        +1 compile 0m 34s trunk passed
        +1 checkstyle 0m 29s trunk passed
        +1 mvnsite 0m 38s trunk passed
        +1 findbugs 1m 5s trunk passed
        +1 javadoc 0m 20s trunk passed
              Patch Compile Tests
        +1 mvninstall 0m 35s the patch passed
        +1 compile 0m 32s the patch passed
        +1 javac 0m 32s the patch passed
        +1 checkstyle 0m 26s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 390 unchanged - 4 fixed = 390 total (was 394)
        +1 mvnsite 0m 34s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 6s the patch passed
        +1 javadoc 0m 17s the patch passed
              Other Tests
        -1 unit 45m 52s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 16s The patch does not generate ASF License warnings.
        68m 23s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation
          hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:14b5c93
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12880777/YARN-6959.004.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 916222ef4bd3 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 8d3fd81
        Default Java 1.8.0_131
        findbugs v3.1.0-RC1
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16766/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16766/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16766/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 19s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              trunk Compile Tests
        +1 mvninstall 17m 24s trunk passed
        +1 compile 0m 41s trunk passed
        +1 checkstyle 0m 36s trunk passed
        +1 mvnsite 0m 46s trunk passed
        +1 findbugs 1m 20s trunk passed
        +1 javadoc 0m 28s trunk passed
              Patch Compile Tests
        +1 mvninstall 0m 42s the patch passed
        +1 compile 0m 38s the patch passed
        +1 javac 0m 38s the patch passed
        +1 checkstyle 0m 32s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 391 unchanged - 4 fixed = 391 total (was 395)
        +1 mvnsite 0m 43s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 21s the patch passed
        +1 javadoc 0m 25s the patch passed
              Other Tests
        -1 unit 44m 30s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 15s The patch does not generate ASF License warnings.
        72m 9s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:14b5c93
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12880790/YARN-6959.004.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 53b9e67b898a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 55a181f
        Default Java 1.8.0_131
        findbugs v3.1.0-RC1
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16767/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16767/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16767/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        yqwang Yuqi Wang added a comment -

        The UT failure is not due to the patch. Re-triggering Jenkins since the UT is not stable.

        hadoopqa Hadoop QA added a comment -
        +1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 18s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              trunk Compile Tests
        +1 mvninstall 14m 35s trunk passed
        +1 compile 0m 35s trunk passed
        +1 checkstyle 0m 31s trunk passed
        +1 mvnsite 0m 36s trunk passed
        +1 findbugs 0m 59s trunk passed
        +1 javadoc 0m 22s trunk passed
              Patch Compile Tests
        +1 mvninstall 0m 33s the patch passed
        +1 compile 0m 31s the patch passed
        +1 javac 0m 31s the patch passed
        +1 checkstyle 0m 27s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 390 unchanged - 4 fixed = 390 total (was 394)
        +1 mvnsite 0m 34s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 7s the patch passed
        +1 javadoc 0m 19s the patch passed
              Other Tests
        +1 unit 43m 33s hadoop-yarn-server-resourcemanager in the patch passed.
        +1 asflicense 0m 16s The patch does not generate ASF License warnings.
        66m 32s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:14b5c93
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12880793/YARN-6959.005.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux d4cc7d8b7fe4 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 55a181f
        Default Java 1.8.0_131
        findbugs v3.1.0-RC1
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16768/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16768/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        yqwang Yuqi Wang added a comment -

Daniel Templeton, Jian He:
Please help review this.

        jianhe Jian He added a comment -

Yuqi Wang, thanks for the patch. One question: I'm wondering under what scenario this can happen. For each failed attempt, we remove it from the ApplicationMasterService#responseMap, and in ApplicationMasterService#allocate we check whether the attempt is in the responseMap; if not, the allocate call is blocked from reaching the scheduler.
Do you see this log line in ApplicationMasterService for the 1st attempt?
        LOG.info("Unregistering app attempt : " + attemptId);

        yqwang Yuqi Wang added a comment -

        Jian He
The race condition can be reproduced during the following segment of one AM-RM RPC call:

        // One AM RM RPC call
        ApplicationMasterService.allocate() {
          AllocateResponseLock lock = responseMap.get(appAttemptId);
          if (lock == null) { // MARK1: At this time, the appAttemptId is still current attempt, so the RPC call continues.
            ...
            throw new ApplicationAttemptNotFoundException();
          }
          synchronized (lock) { // MARK2: The RPC call may be blocked here for a long time
            ...
            // MARK3: During MARK1 and here, RM may switch to the new attempt. So, previous 
            // attempt ResourceRequest may be recorded into current attempt ResourceRequests 
            scheduler.allocate(attemptId, ask, ...) -> scheduler.getApplicationAttempt(attemptId)
            ...
          }
        }
        

I saw the log line you mentioned. It shows that the RM switched to the new attempt, and afterwards some allocate() calls from the previous attempt still reached the scheduler.
For details, I attached the full log in the attachment; please check.

        2017-07-31 21:29:38,351 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e71_1500967702061_2512_01_000361 Container Transitioned from RUNNING to COMPLETED
        2017-07-31 21:29:38,351 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_e71_1500967702061_2512_01_000361 in state: COMPLETED event:FINISHED
        2017-07-31 21:29:38,351 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hadoop	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1500967702061_2512	CONTAINERID=container_e71_1500967702061_2512_01_000361
        2017-07-31 21:29:38,351 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: prod-new used=<memory:0, vCores:0, ports:null> numContainers=9349 user=hadoop user-resources=<memory:0, vCores:0, ports:null>
        2017-07-31 21:29:38,351 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_e71_1500967702061_2512_01_000361, NodeId: BN1APS0A410B91:10025, NodeHttpAddress: Proxy5.Yarn-Prod-Bn2.BN2.ap.gbl:81/proxy/nodemanager/BN1APS0A410B91/8042, Resource: <memory:5120, vCores:1, ports:null>, Priority: 1, Token: Token { kind: ContainerToken, service: 10.65.11.145:10025 }, ] queue=prod-new: capacity=0.7, absoluteCapacity=0.7, usedResources=<memory:0, vCores:0, ports:null>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=6, numContainers=9349 cluster=<memory:261614761, vCores:79088, ports:null>
        2017-07-31 21:29:38,351 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0, ports:null> cluster=<memory:261614761, vCores:79088, ports:null>
        2017-07-31 21:29:38,351 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.prod-new stats: prod-new: capacity=0.7, absoluteCapacity=0.7, usedResources=<memory:0, vCores:0, ports:null>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=6, numContainers=9349
        2017-07-31 21:29:38,351 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1500967702061_2512_000001 released container container_e71_1500967702061_2512_01_000361 on node: host: BN1APS0A410B91:10025 #containers=3 available=<memory:30977, vCores:23, ports:null> used=<memory:23552, vCores:3, ports:null> with event: FINISHED
        2017-07-31 21:29:38,353 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1500967702061_2512_000001
        2017-07-31 21:29:38,353 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1500967702061_2512_000001
        2017-07-31 21:29:38,353 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1500967702061_2512_000001 State change from FINAL_SAVING to FAILED
        2017-07-31 21:29:38,353 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 3
        2017-07-31 21:29:38,354 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1500967702061_2512 State change from RUNNING to ACCEPTED
        2017-07-31 21:29:38,354 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1500967702061_2512_000001 is done. finalState=FAILED
        2017-07-31 21:29:38,354 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1500967702061_2512_000002
        2017-07-31 21:29:38,354 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1500967702061_2512_000002 State change from NEW to SUBMITTED
        2017-07-31 21:29:38,354 INFO [ApplicationMasterLauncher #49] org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1500967702061_2512_000001
        
        yqwang Yuqi Wang added a comment - edited

        Attached the RM log for this bug:

        YARN-6959.yarn_rm.log.zip
        
        yqwang Yuqi Wang added a comment -

Anyway, as in YARN-5197, executing a double check to avoid potential race conditions, network issues, etc. should be a best practice.
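The double-check idea can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names (DoubleCheckSketch, a plain Map standing in for the responseMap); it is not the actual YARN-5197 or YARN-6959 patch code. The point is to re-validate the attempt id after acquiring the per-attempt lock, so a stale allocate() from an already-unregistered attempt never reaches the scheduler:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DoubleCheckSketch {
  static final Map<String, Object> responseMap = new ConcurrentHashMap<>();

  // Returns true only if the ask may be forwarded to the scheduler.
  static boolean allocate(String attemptId) {
    Object lock = responseMap.get(attemptId);
    if (lock == null) {          // first check (MARK1)
      return false;              // attempt unknown: reject
    }
    synchronized (lock) {        // may block for a long time (MARK2)
      // Double check: the attempt may have been unregistered while we
      // were blocked, e.g. because the RM switched to a new attempt.
      if (!responseMap.containsKey(attemptId)) {
        return false;            // stale request: reject
      }
      return true;               // safe point to call scheduler.allocate(...)
    }
  }

  public static void main(String[] args) {
    responseMap.put("attempt_1", new Object());
    System.out.println(allocate("attempt_1")); // true: attempt registered
    responseMap.remove("attempt_1");           // RM unregisters the attempt
    System.out.println(allocate("attempt_1")); // false: stale call rejected
  }
}
```

With the double check, the window between the precondition check and the scheduler call is closed for any caller that unregisters the attempt while holding (or before taking) the same lock.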

        jianhe Jian He added a comment -

        It's still unclear to me. For MARK2, once the lock is released, it can just proceed.

          synchronized (lock) { // MARK2: The RPC call may be blocked here for a long time
            ...
            // MARK3: During MARK1 and here, RM may switch to the new attempt. So, previous 
            // attempt ResourceRequest may be recorded into current attempt ResourceRequests 
            scheduler.allocate(attemptId, ask, ...) -> scheduler.getApplicationAttempt(attemptId)
            ...
          }
        

        From the log, I do see that the AM container size changed. Also, I see that the first AM container completed at

        2017-07-31 21:29:38,338 INFO [ResourceManager Event Processor] org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e71_1500967702061_2512_01_000001 Container Transitioned from RUNNING to COMPLETED
        

If the AM container process had already exited, how is it possible for it to call allocate again?
Can you check the NodeManager log to confirm that the first AM container indeed completed?
Are you able to enable debug-level logging and reproduce this issue, or reproduce it with a UT?

        yqwang Yuqi Wang added a comment - edited

        Attached the NM log for this bug.

        YARN-6959.yarn_nm.log.zip
        
        yqwang Yuqi Wang added a comment -
        2017-07-31 21:29:34,047 INFO [Container Monitor] org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree container_e71_1500967702061_2512_01_000001 for container-id container_e71_1500967702061_2512_01_000001: 7.1 GB of 20 GB physical memory used; 8.5 GB of 30 GB virtual memory used
        2017-07-31 21:29:37,423 INFO [Container Monitor] org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree container_e71_1500967702061_2512_01_000001 for container-id container_e71_1500967702061_2512_01_000001: 7.1 GB of 20 GB physical memory used; 8.5 GB of 30 GB virtual memory used
        2017-07-31 21:29:38,239 WARN [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_e71_1500967702061_2512_01_000001 is : 15
        2017-07-31 21:29:38,239 WARN [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_e71_1500967702061_2512_01_000001 and exit code: 15
        ExitCodeException exitCode=15: 
        	at org.apache.hadoop.util.Shell.runCommand(Shell.java:579)
        	at org.apache.hadoop.util.Shell.run(Shell.java:490)
        	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:756)
        	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:329)
        	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:86)
        	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        	at java.lang.Thread.run(Thread.java:745)
        2017-07-31 21:29:38,239 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from container-launch.
        Container id: container_e71_1500967702061_2512_01_000001
        Exit code: 15
        Stack trace: ExitCodeException exitCode=15: 
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:579)
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.util.Shell.run(Shell.java:490)
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:756)
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:329)
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:86)
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.lang.Thread.run(Thread.java:745)
        2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 
        
        2017-07-31 21:29:38,241 WARN [ContainersLauncher #60] org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 15
        2017-07-31 21:29:38,241 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_e71_1500967702061_2512_01_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
        2017-07-31 21:29:38,241 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_e71_1500967702061_2512_01_000001
        2017-07-31 21:29:38,331 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Cleaning Yarn container: container_id=container_e71_1500967702061_2512_01_000001
        2017-07-31 21:29:38,332 WARN [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoop	OPERATION=Container Finished - Failed	TARGET=ContainerImpl	RESULT=FAILURE	DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE	APPID=application_1500967702061_2512	CONTAINERID=container_e71_1500967702061_2512_01_000001
        2017-07-31 21:29:38,333 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_e71_1500967702061_2512_01_000001 transitioned from EXITED_WITH_FAILURE to DONE
        2017-07-31 21:29:38,333 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Removing container_e71_1500967702061_2512_01_000001 from application application_1500967702061_2512
        2017-07-31 21:29:38,333 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Considering container container_e71_1500967702061_2512_01_000001 for log-aggregation
        2017-07-31 21:29:38,333 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1500967702061_2512
        2017-07-31 21:29:38,333 INFO [AsyncDispatcher event handler] org.apache.spark.network.yarn.YarnShuffleService: Stopping container container_e71_1500967702061_2512_01_000001
        
        jianhe Jian He added a comment -

OK, so the first AM container process had exited; then it's impossible for it to call allocate again. I guess the root cause is different.

        yqwang Yuqi Wang added a comment - edited

        Jian He
        The whole pipeline was:
        Step0. The AM sent heartbeats to the RM.
        Step1. The AM process crashed with exit code 15 without unregistering with the RM.
        Step2-a. The heartbeats sent in Step0 were being processed by the RM between MARK1 and MARK3.
        Step2-b. The NM told the RM that the AM container had completed.
        Step3. The RM switched to the new attempt.
        Step4. The RM recorded the requests from the previous AM's heartbeats into the current attempt.

        So, it is possible.
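        The steps above can be sketched as a minimal, hypothetical model (none of these classes are real YARN types; names are made up for illustration): if recorded asks were resolved through the current attempt, a late allocate() from the previous attempt would leak into the state the new attempt reads, whereas keying the recorded asks by attempt id, which is the essence of the patch, isolates the stale request.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical model of the fix: record asks under the attempt id they came
// with (analogous to per-attempt AppSchedulingInfo objects), so a late
// allocate() carrying a stale attempt id cannot pollute the current attempt.
class PerAttemptAsks {
    final Map<Integer, List<String>> asksByAttempt = new HashMap<>();
    volatile int currentAttemptId = 1;

    void allocate(int attemptId, String ask) {
        // Pre-patch, the effective target was the *current* attempt, which is
        // the wrong one once Step3 has happened.
        asksByAttempt.computeIfAbsent(attemptId, k -> new ArrayList<>()).add(ask);
    }

    List<String> currentAsks() {
        return asksByAttempt.getOrDefault(currentAttemptId, new ArrayList<>());
    }

    public static void main(String[] args) {
        PerAttemptAsks rm = new PerAttemptAsks();
        rm.allocate(1, "AM 20GB");         // Step0: heartbeat from attempt 1
        rm.currentAttemptId = 2;           // Step3: RM switches to new attempt
        rm.allocate(1, "late heartbeat");  // Step2-a arrives late with old id
        // The stale ask stays under attempt 1; attempt 2's asks are untouched.
        System.out.println(rm.currentAsks()); // prints []
    }
}
```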

        yqwang Yuqi Wang added a comment -

        Basically, I meant that the allocate RPC call sent before the AM process exited caused this issue.
        Jian He, could you please reconsider it?

        jianhe Jian He added a comment -

        Do you mean Step0 is blocked on MARK2 until this entire process (AM container completes -> NM reports to RM -> RM processes a series of events -> and finally a new attempt gets added in the scheduler) is completed?
        The question is why Step0 would be blocked for so long; there's no contention to grab the lock, if I understand correctly.

        yqwang Yuqi Wang added a comment -

        I meant the heartbeats from Step0 are blocked between MARK1 and MARK3 (i.e., blocked until Step3, when the RM switched to the new attempt).
        So they may be blocked at MARK2, or at some other place between MARK1 and MARK3.

        And the RPC time before MARK1 cannot be ignored; it can run in parallel with the process (AM container completes -> NM reports to RM -> RM processes a series of events).

        I have not figured out which part accounts for the largest share of the time yet.
        However, in any case, there is a race condition.
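        The race being described is a check-then-act window. A minimal, hypothetical sketch (not real YARN code; class and method names are invented) of why the early precondition check does not protect the later lookup:

```java
// Hypothetical sketch of the window between MARK1 and MARK3: the incoming
// attempt id is validated first, but the current attempt is resolved later,
// and it may have changed in between (AM exits, NM reports, RM switches).
class RaceWindow {
    volatile int currentAttemptNum = 1;

    // MARK1: precondition check on the incoming attempt id.
    boolean isCurrent(int attemptNum) {
        return attemptNum == currentAttemptNum;
    }

    // MARK3: later resolution of the attempt whose requests get updated.
    int resolveTarget() {
        return currentAttemptNum; // may no longer match the id checked above
    }

    public static void main(String[] args) {
        RaceWindow rm = new RaceWindow();
        boolean checked = rm.isCurrent(1);  // MARK1 passes for attempt 1
        rm.currentAttemptNum = 2;           // AM exits, RM switches attempts
        int target = rm.resolveTarget();    // MARK3 now resolves attempt 2
        System.out.println(checked + " " + target); // prints: true 2
    }
}
```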

        jianhe Jian He added a comment -

        Yes, I agree it is possible, but it may happen rarely, as the NM and RM also have a heartbeat interval. The fix is fine; I'm just wondering whether there are other issues behind this, because otherwise the fix will merely hide them, if any.
        Btw, did this happen in a real cluster?

        yqwang Yuqi Wang added a comment -

        Yes, it is very rare. It is the first time I have seen it in our large cluster.

        The log was from our production cluster.
        We have a very large cluster (>50k nodes) which serves daily batch jobs and long-running services for our customers at Microsoft.

        Our customers complained that their job just failed without any effective retry/attempts.
        As the log shows, the AM container size decreased from 20GB to 5GB, so the new attempt will definitely fail since the pmem limitation is enabled.

        As I said in this JIRA's Description:
        Concerns:
        The getApplicationAttempt function in AbstractYarnScheduler is confusing; we had better rename it to getCurrentApplicationAttempt, and reconsider whether there are any other bugs related to getApplicationAttempt.
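        The confusion in question is that the method takes a specific attempt id but resolves only the application, returning whatever attempt is current. A hypothetical illustration with simplified, made-up types (not the real scheduler API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the contract under discussion: the lookup
// ignores the attempt number in the id and always returns the application's
// *current* attempt, which is why getCurrentApplicationAttempt would be a
// clearer name.
class MiniScheduler {
    static final class Attempt {
        final int appId;
        final int attemptNum;
        Attempt(int appId, int attemptNum) {
            this.appId = appId;
            this.attemptNum = attemptNum;
        }
    }

    final Map<Integer, Attempt> currentAttemptByApp = new HashMap<>();

    // Despite taking a specific attempt number, this returns the current attempt.
    Attempt getApplicationAttempt(int appId, int attemptNum) {
        return currentAttemptByApp.get(appId);
    }

    public static void main(String[] args) {
        MiniScheduler sched = new MiniScheduler();
        sched.currentAttemptByApp.put(42, new Attempt(42, 2)); // attempt 2 is current
        MiniScheduler.Attempt got = sched.getApplicationAttempt(42, 1); // caller asks for attempt 1
        System.out.println(got.attemptNum); // prints 2, not 1
    }
}
```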

        jianhe Jian He added a comment -

        we had better rename it to getCurrentApplicationAttempt

        Yep, would you like to rename it in this patch?

        yqwang Yuqi Wang added a comment - - edited

        As with this issue, other places that call getApplicationAttempt may also expect to get the attempt specified in the argument instead of the current attempt.
        And if I just rename getApplicationAttempt to getCurrentApplicationAttempt, it is more likely to hide those bugs.
        I think, for this JIRA only, I will not touch getApplicationAttempt until we have confirmed that all places using getApplicationAttempt are bug-free.

        yqwang Yuqi Wang added a comment -

        The renaming can be done in the next Hadoop version.

        jianhe Jian He added a comment -

        And if I just change getApplicationAttempt to getCurrentApplicationAttempt, it is more likely to hide the bugs.

        I don't get you; it's just a rename refactor, so how would it add/hide bugs?
        Anyway, it looks like there are a bunch of callers, so better not to do it, as this would affect other ongoing activities.
        Would you mind adding a comment on the getApplicationAttempt method to explain its behavior?

        yqwang Yuqi Wang added a comment - - edited

        I already added a comment on it in the patch:
        // TODO: Rename it to getCurrentApplicationAttempt

        I think that is clear. What do you think?

        yqwang Yuqi Wang added a comment -

        Jian He
        Is this patch ready to be accepted?

        jianhe Jian He added a comment -

        Yuqi Wang, yep, I've committed it to trunk and branch-2.
        The patch doesn't apply to branch-2.8; could you provide a patch for branch-2.8?

        hudson Hudson added a comment -

        SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12178 (See https://builds.apache.org/job/Hadoop-trunk-Commit/12178/)
        YARN-6959. RM may allocate wrong AM Container for new attempt. (jianhe: rev e2f6299f6f580d7a03f2377d19ac85f55fd4e73b)

        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
        yqwang Yuqi Wang added a comment -

        Added an updated patch for 2.7 and a new patch for 2.8.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 11m 31s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              branch-2.7 Compile Tests
        +1 mvninstall 7m 56s branch-2.7 passed
        +1 compile 0m 25s branch-2.7 passed with JDK v1.8.0_144
        +1 compile 0m 28s branch-2.7 passed with JDK v1.7.0_131
        +1 checkstyle 0m 35s branch-2.7 passed
        +1 mvnsite 0m 34s branch-2.7 passed
        +1 findbugs 1m 3s branch-2.7 passed
        +1 javadoc 0m 18s branch-2.7 passed with JDK v1.8.0_144
        +1 javadoc 0m 22s branch-2.7 passed with JDK v1.7.0_131
              Patch Compile Tests
        +1 mvninstall 0m 27s the patch passed
        +1 compile 0m 21s the patch passed with JDK v1.8.0_144
        +1 javac 0m 21s the patch passed
        +1 compile 0m 25s the patch passed with JDK v1.7.0_131
        +1 javac 0m 25s the patch passed
        -0 checkstyle 0m 32s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 7 new + 1298 unchanged - 3 fixed = 1305 total (was 1301)
        +1 mvnsite 0m 31s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 9s the patch passed
        +1 javadoc 0m 15s the patch passed with JDK v1.8.0_144
        +1 javadoc 0m 20s the patch passed with JDK v1.7.0_131
              Other Tests
        -1 unit 50m 57s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131.
        +1 asflicense 0m 17s The patch does not generate ASF License warnings.
        129m 31s



        Reason Tests
        JDK v1.8.0_144 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
        JDK v1.7.0_131 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:67e87c9
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12881934/YARN-6959-branch-2.7.002.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 613687b8a6d5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.7 / ae85407
        Default Java 1.7.0_131
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_144 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16905/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16905/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        JDK v1.7.0_131 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16905/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16905/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        jianhe Jian He added a comment -

        Yuqi Wang, TestFairScheduler is failing with the patch; can you take a look?

        yqwang Yuqi Wang added a comment -

        Jian He

        I uploaded a new patch for 2.7. Do you know how to trigger Jenkins against branch-2.7?

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 12m 20s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              branch-2.7 Compile Tests
        +1 mvninstall 7m 23s branch-2.7 passed
        +1 compile 0m 31s branch-2.7 passed with JDK v1.8.0_144
        +1 compile 0m 31s branch-2.7 passed with JDK v1.7.0_131
        +1 checkstyle 0m 42s branch-2.7 passed
        +1 mvnsite 0m 40s branch-2.7 passed
        +1 findbugs 1m 10s branch-2.7 passed
        +1 javadoc 0m 24s branch-2.7 passed with JDK v1.8.0_144
        +1 javadoc 0m 27s branch-2.7 passed with JDK v1.7.0_131
              Patch Compile Tests
        +1 mvninstall 0m 33s the patch passed
        +1 compile 0m 29s the patch passed with JDK v1.8.0_144
        +1 javac 0m 29s the patch passed
        +1 compile 0m 29s the patch passed with JDK v1.7.0_131
        +1 javac 0m 29s the patch passed
        -0 checkstyle 0m 31s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 7 new + 1299 unchanged - 3 fixed = 1306 total (was 1302)
        +1 mvnsite 0m 36s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 29s the patch passed
        +1 javadoc 0m 23s the patch passed with JDK v1.8.0_144
        +1 javadoc 0m 27s the patch passed with JDK v1.7.0_131
              Other Tests
        -1 unit 51m 6s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131.
        +1 asflicense 0m 15s The patch does not generate ASF License warnings.
        134m 6s



        Reason Tests
        JDK v1.8.0_144 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
        JDK v1.7.0_131 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
          hadoop.yarn.server.resourcemanager.TestClientRMTokens



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:67e87c9
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882074/YARN-6959-branch-2.7.003.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 3c00a96098de 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.7 / ae85407
        Default Java 1.7.0_131
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_144 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16925/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16925/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        JDK v1.7.0_131 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16925/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16925/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 15m 47s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              branch-2.7 Compile Tests
        +1 mvninstall 9m 37s branch-2.7 passed
        +1 compile 0m 33s branch-2.7 passed with JDK v1.8.0_144
        +1 compile 0m 38s branch-2.7 passed with JDK v1.7.0_131
        +1 checkstyle 0m 45s branch-2.7 passed
        +1 mvnsite 0m 48s branch-2.7 passed
        +1 findbugs 1m 25s branch-2.7 passed
        +1 javadoc 0m 30s branch-2.7 passed with JDK v1.8.0_144
        +1 javadoc 0m 29s branch-2.7 passed with JDK v1.7.0_131
              Patch Compile Tests
        +1 mvninstall 0m 34s the patch passed
        +1 compile 0m 30s the patch passed with JDK v1.8.0_144
        +1 javac 0m 30s the patch passed
        +1 compile 0m 30s the patch passed with JDK v1.7.0_131
        +1 javac 0m 30s the patch passed
        -0 checkstyle 0m 34s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 7 new + 1298 unchanged - 3 fixed = 1305 total (was 1301)
        +1 mvnsite 0m 36s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 32s the patch passed
        +1 javadoc 0m 21s the patch passed with JDK v1.8.0_144
        +1 javadoc 0m 27s the patch passed with JDK v1.7.0_131
              Other Tests
        -1 unit 54m 17s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131.
        +1 asflicense 0m 18s The patch does not generate ASF License warnings.
        145m 10s



        Reason Tests
        JDK v1.8.0_144 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
        JDK v1.7.0_131 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
          hadoop.yarn.server.resourcemanager.TestRMHA



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:67e87c9
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882090/YARN-6959-branch-2.7.004.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux a3e8c520f0b8 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.7 / ae85407
        Default Java 1.7.0_131
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_144 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16927/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16927/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        JDK v1.7.0_131 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16927/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16927/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 11m 23s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              branch-2.7 Compile Tests
        +1 mvninstall 7m 50s branch-2.7 passed
        +1 compile 0m 23s branch-2.7 passed with JDK v1.8.0_144
        +1 compile 0m 28s branch-2.7 passed with JDK v1.7.0_131
        +1 checkstyle 0m 37s branch-2.7 passed
        +1 mvnsite 0m 35s branch-2.7 passed
        +1 findbugs 1m 2s branch-2.7 passed
        +1 javadoc 0m 19s branch-2.7 passed with JDK v1.8.0_144
        +1 javadoc 0m 22s branch-2.7 passed with JDK v1.7.0_131
              Patch Compile Tests
        +1 mvninstall 0m 27s the patch passed
        +1 compile 0m 22s the patch passed with JDK v1.8.0_144
        +1 javac 0m 22s the patch passed
        +1 compile 0m 26s the patch passed with JDK v1.7.0_131
        +1 javac 0m 26s the patch passed
        -0 checkstyle 0m 33s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 7 new + 1298 unchanged - 3 fixed = 1305 total (was 1301)
        +1 mvnsite 0m 30s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 7s the patch passed
        +1 javadoc 0m 16s the patch passed with JDK v1.8.0_144
        +1 javadoc 0m 20s the patch passed with JDK v1.7.0_131
              Other Tests
        -1 unit 52m 6s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131.
        +1 asflicense 0m 17s The patch does not generate ASF License warnings.
        131m 25s



        Reason Tests
        JDK v1.8.0_144 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
        JDK v1.7.0_131 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:67e87c9
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882122/YARN-6959-branch-2.7.005.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 3f5bced4432b 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.7 / ae85407
        Default Java 1.7.0_131
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_144 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16929/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16929/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        JDK v1.7.0_131 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16929/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16929/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 11m 6s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              branch-2.7 Compile Tests
        +1 mvninstall 7m 55s branch-2.7 passed
        +1 compile 0m 25s branch-2.7 passed with JDK v1.8.0_144
        +1 compile 0m 27s branch-2.7 passed with JDK v1.7.0_131
        +1 checkstyle 0m 34s branch-2.7 passed
        +1 mvnsite 0m 35s branch-2.7 passed
        +1 findbugs 1m 3s branch-2.7 passed
        +1 javadoc 0m 19s branch-2.7 passed with JDK v1.8.0_144
        +1 javadoc 0m 22s branch-2.7 passed with JDK v1.7.0_131
              Patch Compile Tests
        +1 mvninstall 0m 27s the patch passed
        +1 compile 0m 22s the patch passed with JDK v1.8.0_144
        +1 javac 0m 22s the patch passed
        +1 compile 0m 26s the patch passed with JDK v1.7.0_131
        +1 javac 0m 26s the patch passed
        -0 checkstyle 0m 31s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 7 new + 1298 unchanged - 3 fixed = 1305 total (was 1301)
        +1 mvnsite 0m 32s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 9s the patch passed
        +1 javadoc 0m 15s the patch passed with JDK v1.8.0_144
        +1 javadoc 0m 21s the patch passed with JDK v1.7.0_131
              Other Tests
        -1 unit 51m 27s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131.
        +1 asflicense 0m 20s The patch does not generate ASF License warnings.
        129m 50s



        Reason Tests
        JDK v1.8.0_144 Failed junit tests hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
        JDK v1.7.0_131 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.TestClientRMTokens



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:67e87c9
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882141/YARN-6959-branch-2.7.006.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux d062de06688d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.7 / ae85407
        Default Java 1.7.0_131
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_144 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16932/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16932/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_131.txt
        JDK v1.7.0_131 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16932/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16932/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        yqwang Yuqi Wang added a comment -

        Jian He
        The UT failures do not seem to be caused by my patch; please take a look.

        jianhe Jian He added a comment -

        Yuqi Wang, Thanks for the patch, I've committed the branch-2.7 patch.
        Could you upload a patch for branch-2.8 too? branch-2.8 also has some conflicts.

        yqwang Yuqi Wang added a comment -

        Updated the branch-2.8 patch.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 12m 43s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              branch-2.8 Compile Tests
        +1 mvninstall 6m 58s branch-2.8 passed
        +1 compile 0m 29s branch-2.8 passed with JDK v1.8.0_144
        +1 compile 0m 31s branch-2.8 passed with JDK v1.7.0_151
        +1 checkstyle 0m 21s branch-2.8 passed
        +1 mvnsite 0m 37s branch-2.8 passed
        +1 findbugs 1m 11s branch-2.8 passed
        +1 javadoc 0m 21s branch-2.8 passed with JDK v1.8.0_144
        +1 javadoc 0m 24s branch-2.8 passed with JDK v1.7.0_151
              Patch Compile Tests
        +1 mvninstall 0m 30s the patch passed
        +1 compile 0m 27s the patch passed with JDK v1.8.0_144
        +1 javac 0m 27s the patch passed
        +1 compile 0m 29s the patch passed with JDK v1.7.0_151
        +1 javac 0m 29s the patch passed
        -0 checkstyle 0m 18s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 299 unchanged - 0 fixed = 300 total (was 299)
        +1 mvnsite 0m 34s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 20s the patch passed
        +1 javadoc 0m 23s the patch passed with JDK v1.8.0_144
        +1 javadoc 0m 23s the patch passed with JDK v1.7.0_151
              Other Tests
        -1 unit 76m 5s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_151.
        +1 asflicense 0m 17s The patch does not generate ASF License warnings.
        182m 16s



        Reason Tests
        JDK v1.8.0_144 Failed junit tests hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.TestClientRMTokens
        JDK v1.7.0_151 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption
          hadoop.yarn.server.resourcemanager.TestAMAuthorization
          hadoop.yarn.server.resourcemanager.TestClientRMTokens



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:d946387
        JIRA Issue YARN-6959
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12882550/YARN-6959-branch-2.8.002.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux e1cd747613e6 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.8 / 2810e6a
        Default Java 1.7.0_151
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_144 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_151
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16987/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/16987/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_151.txt
        JDK v1.7.0_151 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16987/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/16987/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        yqwang Yuqi Wang added a comment -

        Jian He
        The UT failures do not seem to be caused by my patch; please take a look.

        jianhe Jian He added a comment -

        The patch is committed to branch-2.8 too, thanks Yuqi Wang!

        yqwang Yuqi Wang added a comment -

        Jian He
        Great! Thank you so much!


          People

          • Assignee:
            yqwang Yuqi Wang
          • Reporter:
            yqwang Yuqi Wang
          • Votes:
            0
          • Watchers:
            6