Aug 20, 2016 9:57:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class
Aug 20, 2016 9:57:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class
Aug 20, 2016 9:57:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Aug 20, 2016 9:57:01 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Aug 20, 2016 9:57:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:03 PM com.google.inject.servlet.GuiceFilter setPipeline
WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected.
Aug 20, 2016 9:57:03 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
Aug 20, 2016 9:57:03 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Aug 20, 2016 9:57:03 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
Aug 20, 2016 9:57:03 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Aug 20, 2016 9:57:03 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:03 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:03 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:04 PM com.google.inject.servlet.GuiceFilter setPipeline
WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected.
Aug 20, 2016 9:57:04 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
Aug 20, 2016 9:57:04 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Aug 20, 2016 9:57:04 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
Aug 20, 2016 9:57:04 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Aug 20, 2016 9:57:04 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:04 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:04 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:04 PM com.google.inject.servlet.GuiceFilter setPipeline
WARNING: Multiple Servlet injectors detected. This is a warning indicating that you have more than one GuiceFilter running in your web application. If this is deliberate, you may safely ignore this message. If this is NOT deliberate however, your application may not work as expected.
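The repeated NMWebServices initializations and the benign GuiceFilter warnings above are what a MiniYARNCluster emits while bringing up one ResourceManager and several NodeManagers in a single JVM; the three NM web app blocks here suggest three NodeManagers, as in org.apache.hadoop.yarn.client.api.impl.TestAMRMClient. A minimal sketch of such a setup, assuming the MiniYARNCluster test helper (the class name, test name string, and dir counts are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

public class MiniClusterSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new YarnConfiguration();
        // One RM and three NMs in-process; each NM starts its own Jersey
        // web app, which is why the registration block repeats above and
        // why several GuiceFilter injectors coexist in the same JVM.
        MiniYARNCluster cluster = new MiniYARNCluster("TestAMRMClient", 3, 1, 1);
        cluster.init(conf);
        cluster.start();
        // ... run test logic against cluster.getConfig(), then:
        cluster.stop();
    }
}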
Aug 20, 2016 9:57:05 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
Aug 20, 2016 9:57:05 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Aug 20, 2016 9:57:05 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
Aug 20, 2016 9:57:05 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 AM'
Aug 20, 2016 9:57:05 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:05 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Aug 20, 2016 9:57:05 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
2016-08-20 21:57:06,926 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000001 transitioned from NEW to LOCALIZED
2016-08-20 21:57:06,928 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEvent.EventType: CONTAINER_INIT
2016-08-20 21:57:06,928 INFO [AsyncDispatcher event handler] containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1471710419543_0001
2016-08-20 21:57:06,929 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEvent.EventType: LAUNCH_CONTAINER
2016-08-20 21:57:06,931 DEBUG [Thread-346] service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl entered state INITED
2016-08-20 21:57:06,937 INFO [Thread-346] client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:37347
2016-08-20 21:57:06,938 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:163)
2016-08-20 21:57:06,939 DEBUG [ContainersLauncher #0] concurrent.HadoopThreadPoolExecutor: beforeExecute in thread: ContainersLauncher #0, runnable type: java.util.concurrent.FutureTask
2016-08-20 21:57:06,939 DEBUG [Thread-346] ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
2016-08-20 21:57:06,939 DEBUG [Thread-346] ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationMasterProtocol
2016-08-20 21:57:06,944 DEBUG [Thread-346] ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@21109725
2016-08-20 21:57:06,975 DEBUG [Thread-346] service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl is started
2016-08-20 21:57:06,977 DEBUG [Thread-346] service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.NMClientImpl entered state INITED
2016-08-20 21:57:06,979 INFO [Thread-346] impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
2016-08-20 21:57:06,979 DEBUG [Thread-346] ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
2016-08-20 21:57:06,980 DEBUG [Thread-346] service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.NMClientImpl is started
2016-08-20 21:57:06,981 DEBUG [ContainersLauncher #0] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:327)
2016-08-20 21:57:06,983 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:06,983 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:06,984 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:06,984 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:06,984 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:06,984 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:06,984 DEBUG [Thread-346] ipc.Client: The ping interval is 60000 ms.
2016-08-20 21:57:06,985 DEBUG [Thread-346] ipc.Client: Connecting to localhost/127.0.0.1:37347
2016-08-20 21:57:06,985 DEBUG [IPC Server listener on 37347] ipc.Server: Server connection from 127.0.0.1:46672; # active connections: 1; # queued calls: 0
2016-08-20 21:57:06,986 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:06,986 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:06,986 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
2016-08-20 21:57:06,984 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:06,990 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:06,990 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:06,990 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:06,990 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:06,991 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:06,991 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:06,991 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:06,991 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:06,991 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,000 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: NEGOTIATE
2016-08-20 21:57:07,000 DEBUG [Socket Reader #1 for port 37347] ipc.Server: got #-33
2016-08-20 21:57:07,001 DEBUG [Socket Reader #1 for port 37347] security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2016-08-20 21:57:07,001 DEBUG [Socket Reader #1 for port 37347] ipc.Server: Socket Reader #1 for port 37347: responding to null from 127.0.0.1:46672 Call#-33 Retry#-1
2016-08-20 21:57:07,001 DEBUG [Socket Reader #1 for port 37347] ipc.Server: Socket Reader #1 for port 37347: responding to null from 127.0.0.1:46672 Call#-33 Retry#-1 Wrote 166 bytes.
2016-08-20 21:57:07,002 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,003 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,003 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,003 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:07,005 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,005 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:07,005 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:07,005 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,005 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,006 DEBUG [Thread-346] security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB info:org.apache.hadoop.yarn.security.SchedulerSecurityInfo$1@72d9b8d
2016-08-20 21:57:07,008 DEBUG [Thread-346] security.AMRMTokenSelector: Looking for a token with service 127.0.0.1:37347
2016-08-20 21:57:07,008 DEBUG [Thread-346] security.AMRMTokenSelector: Token kind is YARN_AM_RM_TOKEN and the token's service name is 127.0.0.1:37347
2016-08-20 21:57:07,009 DEBUG [Thread-346] security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2016-08-20 21:57:07,010 DEBUG [Thread-346] security.SaslRpcClient: Use TOKEN authentication for protocol ApplicationMasterProtocolPB
2016-08-20 21:57:07,012 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting username: Cg0KCQgBENe0m8bqKhABEJKcsZH6/////wE=
2016-08-20 21:57:07,012 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting userPassword
2016-08-20 21:57:07,012 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting realm: default
2016-08-20 21:57:07,014 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: INITIATE token: "charset=utf-8,username=\"Cg0KCQgBENe0m8bqKhABEJKcsZH6/////wE=\",realm=\"default\",nonce=\"W/Dx84eEOIZpvgX6Uk6oOajyL1JYZ56qIIeeygHx\",nc=00000001,cnonce=\"YTwB7B50tUh/qfiPf+tJkDmXRT1NKpUIj9aLcyZd\",digest-uri=\"/default\",maxbuf=65536,response=50d337641cdc7f98d611ee646f320aab,qop=auth" auths { method: "TOKEN" mechanism: "DIGEST-MD5" protocol: "" serverId: "default" }
2016-08-20 21:57:07,015 DEBUG [Socket Reader #1 for port 37347] ipc.Server: got #-33
2016-08-20 21:57:07,016 DEBUG [Socket Reader #1 for port 37347] ipc.Server: Have read input token of size 274 for processing by saslServer.evaluateResponse()
2016-08-20 21:57:07,016 DEBUG [Socket Reader #1 for port 37347] security.AMRMTokenSecretManager: Trying to retrieve password for appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,017 DEBUG [Socket Reader #1 for port 37347] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,018 DEBUG [Socket Reader #1 for port 37347] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,018 DEBUG [Socket Reader #1 for port 37347] ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2016-08-20 21:57:07,018 DEBUG [Socket Reader #1 for port 37347] ipc.Server: SASL server context established. Negotiated QoP is auth
2016-08-20 21:57:07,019 DEBUG [Socket Reader #1 for port 37347] ipc.Server: SASL server successfully authenticated client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,019 INFO [Socket Reader #1 for port 37347] ipc.Server: Auth successful for appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,019 DEBUG [Socket Reader #1 for port 37347] ipc.Server: Socket Reader #1 for port 37347: responding to null from 127.0.0.1:46672 Call#-33 Retry#-1
2016-08-20 21:57:07,019 DEBUG [Socket Reader #1 for port 37347] ipc.Server: Socket Reader #1 for port 37347: responding to null from 127.0.0.1:46672 Call#-33 Retry#-1 Wrote 64 bytes.
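The AMRMClientImpl and NMClientImpl INITED/started entries, the RMProxy connection, and the DIGEST-MD5 handshake above (authenticated via the YARN_AM_RM_TOKEN) correspond to the application master's client setup. A minimal sketch of the calls that produce these entries, assuming the synchronous AMRMClient/NMClient API; the wrapping class is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.client.api.NMClient;

class AmClientsSketch {
    AMRMClient<ContainerRequest> amRMClient;
    NMClient nmClient;

    // conf is the cluster configuration whose scheduler address is the
    // 127.0.0.1:37347 endpoint seen in "Connecting to ResourceManager ...".
    void startClients(Configuration conf) {
        amRMClient = AMRMClient.createAMRMClient();
        amRMClient.init(conf);  // "Service: ...AMRMClientImpl entered state INITED"
        amRMClient.start();     // "Service ...AMRMClientImpl is started"

        nmClient = NMClient.createNMClient();
        nmClient.init(conf);    // "Service: ...NMClientImpl entered state INITED"
        nmClient.start();       // "yarn.client.max-cached-nodemanagers-proxies : 0"
    }
}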
2016-08-20 21:57:07,032 DEBUG [Thread-346] ipc.Client: Negotiated QOP is :auth
2016-08-20 21:57:07,044 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root: starting, having connections 3
2016-08-20 21:57:07,046 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root sending #7
2016-08-20 21:57:07,049 DEBUG [Socket Reader #1 for port 37347] ipc.Server: got #-3
2016-08-20 21:57:07,053 DEBUG [Socket Reader #1 for port 37347] ipc.Server: Successfully authorized userInfo { } protocol: "org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB"
2016-08-20 21:57:07,053 DEBUG [Socket Reader #1 for port 37347] ipc.Server: got #7
2016-08-20 21:57:07,054 DEBUG [IPC Server handler 0 on 37347] ipc.Server: IPC Server handler 0 on 37347: org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.registerApplicationMaster from 127.0.0.1:46672 Call#7 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:07,056 DEBUG [IPC Server handler 0 on 37347] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:07,063 INFO [IPC Server handler 0 on 37347] resourcemanager.ApplicationMasterService: AM registration appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,063 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.event.RMAppAttemptRegistrationEvent.EventType: REGISTERED
2016-08-20 21:57:07,063 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type REGISTERED
2016-08-20 21:57:07,064 INFO [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: appattempt_1471710419543_0001_000001 State change from LAUNCHED to RUNNING
2016-08-20 21:57:07,064 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEvent.EventType: ATTEMPT_REGISTERED
2016-08-20 21:57:07,064 DEBUG [AsyncDispatcher event handler] rmapp.RMAppImpl: Processing event for application_1471710419543_0001 of type ATTEMPT_REGISTERED
2016-08-20 21:57:07,064 INFO [AsyncDispatcher event handler] rmapp.RMAppImpl: application_1471710419543_0001 State change from ACCEPTED to RUNNING on event=ATTEMPT_REGISTERED
2016-08-20 21:57:07,067 INFO [IPC Server handler 0 on 37347] resourcemanager.RMAuditLogger: USER=root IP=127.0.0.1 OPERATION=Register App Master TARGET=ApplicationMasterService RESULT=SUCCESS APPID=application_1471710419543_0001 APPATTEMPTID=appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,069 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent.EventType: CONTAINER_LAUNCHED
2016-08-20 21:57:07,070 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000001 of type CONTAINER_LAUNCHED
2016-08-20 21:57:07,072 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000001 transitioned from LOCALIZED to RUNNING
2016-08-20 21:57:07,072 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerStartMonitoringEvent.EventType: START_MONITORING_CONTAINER
2016-08-20 21:57:07,084 DEBUG [IPC Server handler 0 on 37347] ipc.Server: Served: registerApplicationMaster queueTime= 5 procesingTime= 25
2016-08-20 21:57:07,084 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,085 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,085 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,085 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:07,086 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,086 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:07,086 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:07,086 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,086 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,087 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,087 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,087 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,087 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:07,088 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,088 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:07,088 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:07,088 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,088 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,098 DEBUG [IPC Server handler 0 on 37347] ipc.Server: IPC Server handler 0 on 37347: responding to org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.registerApplicationMaster from 127.0.0.1:46672 Call#7 Retry#0
2016-08-20 21:57:07,099 DEBUG [IPC Server handler 0 on 37347] ipc.Server: IPC Server handler 0 on 37347: responding to org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.registerApplicationMaster from 127.0.0.1:46672 Call#7 Retry#0 Wrote 50 bytes.
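The registerApplicationMaster RPC above (Call#7) is what moves the attempt from LAUNCHED to RUNNING and the application from ACCEPTED to RUNNING. On the client side it is a single call, continuing the sketch above; the host, port, and tracking URL values are illustrative placeholders:

import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;

// Produces "AM registration appattempt_1471710419543_0001_000001" and the
// RMAuditLogger "Register App Master ... RESULT=SUCCESS" entries above.
RegisterApplicationMasterResponse registration =
    amRMClient.registerApplicationMaster("localhost", 0, "");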
2016-08-20 21:57:07,099 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root got value #7
2016-08-20 21:57:07,100 DEBUG [Thread-346] ipc.ProtobufRpcEngine: Call: registerApplicationMaster took 116ms
2016-08-20 21:57:07,101 DEBUG [Thread-346] util.RackResolver: Resolved localhost to /default-rack
2016-08-20 21:57:07,103 DEBUG [Thread-346] impl.RemoteRequestsTable: Added priority=1
2016-08-20 21:57:07,103 DEBUG [Thread-346] impl.RemoteRequestsTable: Added resourceName=localhost
2016-08-20 21:57:07,103 DEBUG [Thread-346] impl.RemoteRequestsTable: Added Execution Type=GUARANTEED
2016-08-20 21:57:07,103 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,104 DEBUG [Thread-346] impl.AMRMClientImpl: addResourceRequest: applicationId= priority=1 resourceName=localhost numContainers=1 #asks=1
2016-08-20 21:57:07,104 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,104 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,104 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:07,106 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,106 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:07,106 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:07,107 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,107 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,107 DEBUG [Thread-346] impl.RemoteRequestsTable: Added resourceName=/default-rack
2016-08-20 21:57:07,107 DEBUG [Thread-346] impl.RemoteRequestsTable: Added Execution Type=GUARANTEED
2016-08-20 21:57:07,108 DEBUG [Thread-346] impl.AMRMClientImpl: addResourceRequest: applicationId= priority=1 resourceName=/default-rack numContainers=1 #asks=2
2016-08-20 21:57:07,108 DEBUG [Thread-346] impl.RemoteRequestsTable: Added resourceName=*
2016-08-20 21:57:07,108 DEBUG [Thread-346] impl.RemoteRequestsTable: Added Execution Type=GUARANTEED
2016-08-20 21:57:07,108 DEBUG [Thread-346] impl.AMRMClientImpl: addResourceRequest: applicationId= priority=1 resourceName=* numContainers=1 #asks=3
2016-08-20 21:57:07,109 DEBUG [Thread-346] util.RackResolver: Resolved localhost to /default-rack
2016-08-20 21:57:07,109 DEBUG [Thread-346] impl.AMRMClientImpl: addResourceRequest: applicationId= priority=1 resourceName=localhost numContainers=2 #asks=3
2016-08-20 21:57:07,110 DEBUG [Thread-346] impl.AMRMClientImpl: addResourceRequest: applicationId= priority=1 resourceName=/default-rack numContainers=2 #asks=3
2016-08-20 21:57:07,110 DEBUG [Thread-346] impl.AMRMClientImpl: addResourceRequest: applicationId= priority=1 resourceName=* numContainers=2 #asks=3
2016-08-20 21:57:07,110 DEBUG [Thread-346] util.RackResolver: Resolved localhost to /default-rack
2016-08-20 21:57:07,111 DEBUG [Thread-346] impl.AMRMClientImpl: addResourceRequest: applicationId= priority=1 resourceName=localhost numContainers=3 #asks=3
2016-08-20 21:57:07,111 DEBUG [Thread-346] impl.AMRMClientImpl: addResourceRequest: applicationId= priority=1 resourceName=/default-rack numContainers=3 #asks=3
2016-08-20 21:57:07,111 DEBUG [Thread-346] impl.AMRMClientImpl: addResourceRequest: applicationId= priority=1 resourceName=* numContainers=3 #asks=3
2016-08-20 21:57:07,118 INFO [ContainersLauncher #0] nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000001/default_container_executor.sh]
2016-08-20 21:57:07,148 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root sending #8
2016-08-20 21:57:07,148 DEBUG [Socket Reader #1 for port 37347] ipc.Server: got #8
2016-08-20 21:57:07,148 DEBUG [IPC Server handler 1 on 37347] ipc.Server: IPC Server handler 1 on 37347: org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 127.0.0.1:46672 Call#8 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:07,149 DEBUG [IPC Server handler 1 on 37347] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:07,154 DEBUG [IPC Server handler 1 on 37347] capacity.CapacityScheduler: allocate: pre-update appattempt_1471710419543_0001_000001 ask size =3
2016-08-20 21:57:07,154 DEBUG [IPC Server handler 1 on 37347] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.083333336 Partition:
2016-08-20 21:57:07,154 DEBUG [IPC Server handler 1 on 37347] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=1024
2016-08-20 21:57:07,155 DEBUG [IPC Server handler 1 on 37347] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 0, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,155 DEBUG [IPC Server handler 1 on 37347] scheduler.ActiveUsersManager: User root added to activeUsers, currently: 1
2016-08-20 21:57:07,155 DEBUG [IPC Server handler 1 on 37347] capacity.CapacityScheduler: allocate: post-update
2016-08-20 21:57:07,155 DEBUG [IPC Server handler 1 on 37347] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.083333336 Partition:
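The addResourceRequest trail above shows AMRMClient's ask expansion: a single node-local ContainerRequest for localhost fans out into localhost, /default-rack, and * ResourceRequests, so #asks stops at 3 while each repeated request only re-resolves the rack and bumps numContainers to 2 and then 3. Continuing the sketch, with the container capability assumed since this DEBUG format prints the Capability field blank:

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;

Resource capability = Resource.newInstance(1024, 1);  // assumed; not in the log
Priority priority = Priority.newInstance(1);          // matches priority=1 above
String[] nodes = { "localhost" };

// Three identical node-local requests: each call logs
// "Resolved localhost to /default-rack" and increments numContainers on
// the localhost, /default-rack, and * asks while #asks stays at 3.
for (int i = 0; i < 3; i++) {
    amRMClient.addContainerRequest(
        new ContainerRequest(capability, nodes, null, priority));
}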
2016-08-20 21:57:07,156 DEBUG [IPC Server handler 1 on 37347] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=1024
2016-08-20 21:57:07,156 DEBUG [IPC Server handler 1 on 37347] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 0, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,156 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.event.RMAppAttemptStatusupdateEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,158 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type STATUS_UPDATE
2016-08-20 21:57:07,160 DEBUG [IPC Server handler 1 on 37347] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.083333336 Partition:
2016-08-20 21:57:07,161 DEBUG [IPC Server handler 1 on 37347] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=1024
2016-08-20 21:57:07,161 DEBUG [IPC Server handler 1 on 37347] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 3, Location: localhost, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,161 DEBUG [IPC Server handler 1 on 37347] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 3, Location: /default-rack, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,161 DEBUG [IPC Server handler 1 on 37347] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 3, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,162 DEBUG [IPC Server handler 1 on 37347] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.083333336 Partition:
2016-08-20 21:57:07,166 DEBUG [IPC Server handler 1 on 37347] ipc.Server: Served: allocate queueTime= 1 procesingTime= 17
2016-08-20 21:57:07,166 DEBUG [IPC Server handler 1 on 37347] ipc.Server: IPC Server handler 1 on 37347: responding to org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 127.0.0.1:46672 Call#8 Retry#0
2016-08-20 21:57:07,167 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root got value #8
2016-08-20 21:57:07,168 DEBUG [Thread-346] ipc.ProtobufRpcEngine: Call: allocate took 21ms
2016-08-20 21:57:07,170 DEBUG [IPC Server handler 1 on 37347] ipc.Server: IPC Server handler 1 on 37347: responding to org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 127.0.0.1:46672 Call#8 Retry#0 Wrote 47 bytes.
2016-08-20 21:57:07,185 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,185 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,186 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,186 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:07,186 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,186 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:07,187 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:07,187 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,187 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign containers to child-queue of root
2016-08-20 21:57:07,189 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,189 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,190 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,190 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:07,190 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,192 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: printChildQueues - queue: root child-queues: root.defaultusedCapacity=(0.083333336), label=(*)
2016-08-20 21:57:07,192 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.083333336, absoluteUsedCapacity=0.083333336, numApps=1, numContainers=1
2016-08-20 21:57:07,192 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: assignContainers: node=localhost #applications=1
2016-08-20 21:57:07,193 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.083333336 Partition:
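Call#8 above is the first allocate heartbeat: it ships the three pending asks to the CapacityScheduler ("allocate: pre-update ... ask size =3") and returns quickly ("Call: allocate took 21ms") before any container has been handed back. A typical polling loop, continuing the sketch with an assumed progress value:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.Container;

List<Container> allocated = new ArrayList<>();
while (allocated.size() < 3) {
    // Each heartbeat sends any pending ResourceRequests and receives
    // the containers the scheduler has assigned since the previous beat.
    AllocateResponse response = amRMClient.allocate(0.1f);
    allocated.addAll(response.getAllocatedContainers());
    Thread.sleep(100);
}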
2016-08-20 21:57:07,193 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: Headroom calculation for user root: userLimit= queueMaxAvailRes= consumed= headroom=
2016-08-20 21:57:07,193 DEBUG [SchedulerEventDispatcher:Event Processor] fica.FiCaSchedulerApp: pre-assignContainers for application application_1471710419543_0001
2016-08-20 21:57:07,193 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.083333336 Partition:
2016-08-20 21:57:07,193 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=1024
2016-08-20 21:57:07,199 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 0, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,199 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.083333336 Partition:
2016-08-20 21:57:07,201 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=1024
2016-08-20 21:57:07,202 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 3, Location: localhost, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,204 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 3, Location: /default-rack, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,205 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,207 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 3, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,208 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,208 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.IncreaseContainerAllocator: Skip allocating increase request since we don't have any increase request on this node=localhost:36489
2016-08-20 21:57:07,208 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,209 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:07,210 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.RegularContainerAllocator: assignContainers: node=localhost application=application_1471710419543_0001 priority=1 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 3, Location: localhost, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } type=NODE_LOCAL
2016-08-20 21:57:07,211 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,212 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.AppSchedulingInfo: allocate: applicationId=application_1471710419543_0001 container=container_1471710419543_0001_01_000002 host=localhost:36489 user=root resource= type=NODE_LOCAL
2016-08-20 21:57:07,213 DEBUG [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000002 of type START
2016-08-20 21:57:07,213 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent.EventType: CONTAINER_ALLOCATED
2016-08-20 21:57:07,213 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type CONTAINER_ALLOCATED
2016-08-20 21:57:07,214 INFO [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000002 Container Transitioned from NEW to ALLOCATED
2016-08-20 21:57:07,214 DEBUG [SchedulerEventDispatcher:Event Processor] fica.FiCaSchedulerApp: allocate: applicationAttemptId=appattempt_1471710419543_0001_000001 container=container_1471710419543_0001_01_000002 host=localhost type=NODE_LOCAL
2016-08-20 21:57:07,215 INFO [SchedulerEventDispatcher:Event Processor] resourcemanager.RMAuditLogger: USER=root OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000002 RESOURCE=
2016-08-20 21:57:07,217 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerNode: Assigned container container_1471710419543_0001_01_000002 of capacity on host localhost:36489, which has 1 containers, used and available after allocation
2016-08-20 21:57:07,217 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.RegularContainerAllocator: Resetting scheduling opportunities
2016-08-20 21:57:07,217 INFO [SchedulerEventDispatcher:Event Processor] allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1471710419543_0001_000001 container=container_1471710419543_0001_01_000002 queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@24efdaa6 clusterResource= type=NODE_LOCAL
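container_1471710419543_0001_01_000002 is now ALLOCATED on localhost:36489 and will reach the AM in a later allocate response, after which the AM can launch it through NMClient. A hedged sketch continuing from the loop above; the launch context contents are application-specific and not visible in this log, so the command is a placeholder:

import java.util.Arrays;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

for (Container container : allocated) {
    // Empty local resources, environment, service data, tokens, and ACLs;
    // only an illustrative shell command is supplied.
    ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
        null, null, Arrays.asList("sleep 100"), null, null, null);
    nmClient.startContainer(container, ctx);
}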
queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@24efdaa6 clusterResource= type=NODE_LOCAL 2016-08-20 21:57:07,217 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: post-assignContainers for application application_1471710419543_0001 2016-08-20 21:57:07,217 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.083333336 Partition: 2016-08-20 21:57:07,218 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=2048 2016-08-20 21:57:07,218 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 0, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } 2016-08-20 21:57:07,218 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.083333336 Partition: 2016-08-20 21:57:07,218 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=2048 2016-08-20 21:57:07,218 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 2, Location: localhost, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } 2016-08-20 21:57:07,218 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 2, Location: /default-rack, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } 2016-08-20 21:57:07,218 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 2, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } 2016-08-20 21:57:07,219 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.16666667 Partition: 2016-08-20 21:57:07,219 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default 
userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.16666667 Partition: 2016-08-20 21:57:07,219 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.16666667 Partition: 2016-08-20 21:57:07,219 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: default user=root used= numContainers=2 headroom = user-resources= 2016-08-20 21:57:07,220 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Assigned to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.16666667, absoluteUsedCapacity=0.16666667, numApps=1, numContainers=2 --> , NODE_LOCAL 2016-08-20 21:57:07,220 INFO [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.16666667, absoluteUsedCapacity=0.16666667, numApps=1, numContainers=2 2016-08-20 21:57:07,220 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: printChildQueues - queue: root child-queues: root.defaultusedCapacity=(0.16666667), label=(*) 2016-08-20 21:57:07,220 INFO [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.16666667 absoluteUsedCapacity=0.16666667 used= cluster= 2016-08-20 21:57:07,220 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: ParentQ=root assignedSoFarInThisIteration= usedCapacity=0.16666667 absoluteUsedCapacity=0.16666667 2016-08-20 21:57:07,220 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign containers to child-queue of root 2016-08-20 21:57:07,221 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: printChildQueues - queue: root child-queues: root.defaultusedCapacity=(0.16666667), label=(*) 2016-08-20 21:57:07,221 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.16666667, absoluteUsedCapacity=0.16666667, numApps=1, numContainers=2 2016-08-20 21:57:07,221 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: assignContainers: node=localhost #applications=1 2016-08-20 21:57:07,221 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.16666667 Partition: 2016-08-20 21:57:07,221 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: Headroom calculation for user root: userLimit= queueMaxAvailRes= consumed= headroom= 2016-08-20 21:57:07,221 DEBUG [SchedulerEventDispatcher:Event Processor] fica.FiCaSchedulerApp: pre-assignContainers for application application_1471710419543_0001 2016-08-20 21:57:07,232 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default 
userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.16666667 Partition: 2016-08-20 21:57:07,233 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=2048 2016-08-20 21:57:07,233 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 0, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } 2016-08-20 21:57:07,233 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.16666667 Partition: 2016-08-20 21:57:07,233 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=2048 2016-08-20 21:57:07,233 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 2, Location: localhost, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } 2016-08-20 21:57:07,233 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 2, Location: /default-rack, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } 2016-08-20 21:57:07,233 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 2, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } 2016-08-20 21:57:07,234 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.IncreaseContainerAllocator: Skip allocating increase request since we don't have any increase request on this node=localhost:36489 2016-08-20 21:57:07,234 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.RegularContainerAllocator: assignContainers: node=localhost application=application_1471710419543_0001 priority=1 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 2, Location: localhost, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } type=NODE_LOCAL 2016-08-20 21:57:07,243 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.AppSchedulingInfo: allocate: applicationId=application_1471710419543_0001 container=container_1471710419543_0001_01_000003 host=localhost:36489 user=root resource= type=NODE_LOCAL 2016-08-20 21:57:07,244 DEBUG 
2016-08-20 21:57:07,244 DEBUG [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000003 of type START
2016-08-20 21:57:07,244 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent.EventType: CONTAINER_ALLOCATED
2016-08-20 21:57:07,245 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type CONTAINER_ALLOCATED
2016-08-20 21:57:07,246 INFO [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000003 Container Transitioned from NEW to ALLOCATED
2016-08-20 21:57:07,246 DEBUG [SchedulerEventDispatcher:Event Processor] fica.FiCaSchedulerApp: allocate: applicationAttemptId=appattempt_1471710419543_0001_000001 container=container_1471710419543_0001_01_000003 host=localhost type=NODE_LOCAL
2016-08-20 21:57:07,246 INFO [SchedulerEventDispatcher:Event Processor] resourcemanager.RMAuditLogger: USER=root OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000003 RESOURCE=
2016-08-20 21:57:07,247 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerNode: Assigned container container_1471710419543_0001_01_000003 of capacity on host localhost:36489, which has 2 containers, used and available after allocation
2016-08-20 21:57:07,247 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.RegularContainerAllocator: Resetting scheduling opportunities
2016-08-20 21:57:07,248 INFO [SchedulerEventDispatcher:Event Processor] allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1471710419543_0001_000001 container=container_1471710419543_0001_01_000003 queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@24efdaa6 clusterResource= type=NODE_LOCAL
2016-08-20 21:57:07,248 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: post-assignContainers for application application_1471710419543_0001
2016-08-20 21:57:07,248 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.16666667 Partition:
2016-08-20 21:57:07,249 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=3072
2016-08-20 21:57:07,249 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 0, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,249 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.16666667 Partition:
2016-08-20 21:57:07,249 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=3072
2016-08-20 21:57:07,250 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 1, Location: localhost, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,250 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 1, Location: /default-rack, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,250 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 1, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,251 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.25 Partition:
2016-08-20 21:57:07,251 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.25 Partition:
2016-08-20 21:57:07,252 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.25 Partition:
2016-08-20 21:57:07,252 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: default user=root used= numContainers=3 headroom = user-resources=
2016-08-20 21:57:07,252 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Assigned to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=3 --> , NODE_LOCAL
2016-08-20 21:57:07,262 INFO [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=3
2016-08-20 21:57:07,262 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: printChildQueues - queue: root child-queues: root.defaultusedCapacity=(0.25), label=(*)
2016-08-20 21:57:07,263 INFO [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used= cluster=
2016-08-20 21:57:07,263 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: ParentQ=root assignedSoFarInThisIteration= usedCapacity=0.25 absoluteUsedCapacity=0.25
2016-08-20 21:57:07,263 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign containers to child-queue of root
2016-08-20 21:57:07,263 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: printChildQueues - queue: root child-queues: root.defaultusedCapacity=(0.25), label=(*)
2016-08-20 21:57:07,264 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=3
2016-08-20 21:57:07,264 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: assignContainers: node=localhost #applications=1
2016-08-20 21:57:07,265 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.25 Partition:
2016-08-20 21:57:07,265 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: Headroom calculation for user root: userLimit= queueMaxAvailRes= consumed= headroom=
2016-08-20 21:57:07,265 DEBUG [SchedulerEventDispatcher:Event Processor] fica.FiCaSchedulerApp: pre-assignContainers for application application_1471710419543_0001
2016-08-20 21:57:07,266 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.25 Partition:
2016-08-20 21:57:07,266 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=3072
2016-08-20 21:57:07,266 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 0, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,267 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 1 clusterCapacity: resourceByLabel: usageratio: 0.25 Partition:
2016-08-20 21:57:07,267 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=3072
2016-08-20 21:57:07,267 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 1, Location: localhost, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
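The "Container Transitioned" entries nearby (NEW to ALLOCATED here, and ALLOCATED to ACQUIRED once the AM picks the containers up later in this log) come from an event-driven state machine in the RM. A simplified model covering just the transitions visible in this log, not the real RMContainerImpl transition table, which has many more states and events:

```java
import java.util.EnumMap;
import java.util.Map;

// Simplified sketch of the RM-side container lifecycle visible in this log:
// a START event moves NEW -> ALLOCATED; an ACQUIRED event moves
// ALLOCATED -> ACQUIRED. The real RMContainerImpl also has RUNNING,
// COMPLETED, KILLED, and several failure transitions.
public class RmContainerLifecycle {
    enum State { NEW, ALLOCATED, ACQUIRED }
    enum Event { START, ACQUIRED }

    private static final Map<State, Map<Event, State>> TRANSITIONS = new EnumMap<>(State.class);
    static {
        TRANSITIONS.put(State.NEW, Map.of(Event.START, State.ALLOCATED));
        TRANSITIONS.put(State.ALLOCATED, Map.of(Event.ACQUIRED, State.ACQUIRED));
    }

    static State handle(State current, Event event) {
        State next = TRANSITIONS.getOrDefault(current, Map.of()).get(event);
        if (next == null) throw new IllegalStateException(event + " invalid in " + current);
        System.out.println("Container Transitioned from " + current + " to " + next);
        return next;
    }

    public static void main(String[] args) {
        State s = State.NEW;
        s = handle(s, Event.START);     // matches "Processing ... of type START"
        s = handle(s, Event.ACQUIRED);  // matches "Processing ... of type ACQUIRED"
    }
}
```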
2016-08-20 21:57:07,268 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 1, Location: /default-rack, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,268 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 1, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,268 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.IncreaseContainerAllocator: Skip allocating increase request since we don't have any increase request on this node=localhost:36489
2016-08-20 21:57:07,269 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.RegularContainerAllocator: assignContainers: node=localhost application=application_1471710419543_0001 priority=1 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 1, Location: localhost, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: } type=NODE_LOCAL
2016-08-20 21:57:07,269 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.ActiveUsersManager: User root removed from activeUsers, currently: 0
2016-08-20 21:57:07,270 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.AppSchedulingInfo: allocate: applicationId=application_1471710419543_0001 container=container_1471710419543_0001_01_000004 host=localhost:36489 user=root resource= type=NODE_LOCAL
2016-08-20 21:57:07,270 DEBUG [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000004 of type START
2016-08-20 21:57:07,270 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent.EventType: CONTAINER_ALLOCATED
2016-08-20 21:57:07,272 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type CONTAINER_ALLOCATED
2016-08-20 21:57:07,272 INFO [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000004 Container Transitioned from NEW to ALLOCATED
2016-08-20 21:57:07,272 DEBUG [SchedulerEventDispatcher:Event Processor] fica.FiCaSchedulerApp: allocate: applicationAttemptId=appattempt_1471710419543_0001_000001 container=container_1471710419543_0001_01_000004 host=localhost type=NODE_LOCAL
2016-08-20 21:57:07,273 INFO [SchedulerEventDispatcher:Event Processor] resourcemanager.RMAuditLogger: USER=root OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000004 RESOURCE=
2016-08-20 21:57:07,273 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerNode: Assigned container container_1471710419543_0001_01_000004 of capacity on host localhost:36489, which has 3 containers, used and available after allocation
2016-08-20 21:57:07,273 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.RegularContainerAllocator: Resetting scheduling opportunities
2016-08-20 21:57:07,274 INFO [SchedulerEventDispatcher:Event Processor] allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1471710419543_0001_000001 container=container_1471710419543_0001_01_000004 queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@24efdaa6 clusterResource= type=NODE_LOCAL
2016-08-20 21:57:07,274 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: post-assignContainers for application application_1471710419543_0001
2016-08-20 21:57:07,274 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.25 Partition:
2016-08-20 21:57:07,275 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=4096
2016-08-20 21:57:07,275 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 0, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,276 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.25 Partition:
2016-08-20 21:57:07,276 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=4096
2016-08-20 21:57:07,276 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:07,276 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.33333334 Partition:
2016-08-20 21:57:07,277 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.33333334 Partition:
2016-08-20 21:57:07,277 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.33333334 Partition:
2016-08-20 21:57:07,277 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: default user=root used= numContainers=4 headroom = user-resources=
2016-08-20 21:57:07,278 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Assigned to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.33333334, absoluteUsedCapacity=0.33333334, numApps=1, numContainers=4 --> , NODE_LOCAL
2016-08-20 21:57:07,278 INFO [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.33333334, absoluteUsedCapacity=0.33333334, numApps=1, numContainers=4
2016-08-20 21:57:07,280 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: printChildQueues - queue: root child-queues: root.defaultusedCapacity=(0.33333334), label=(*)
2016-08-20 21:57:07,281 INFO [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.33333334 absoluteUsedCapacity=0.33333334 used= cluster=
2016-08-20 21:57:07,281 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: ParentQ=root assignedSoFarInThisIteration= usedCapacity=0.33333334 absoluteUsedCapacity=0.33333334
2016-08-20 21:57:07,282 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign containers to child-queue of root
2016-08-20 21:57:07,282 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: printChildQueues - queue: root child-queues: root.defaultusedCapacity=(0.33333334), label=(*)
2016-08-20 21:57:07,282 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.33333334, absoluteUsedCapacity=0.33333334, numApps=1, numContainers=4
2016-08-20 21:57:07,282 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: assignContainers: node=localhost #applications=1
2016-08-20 21:57:07,284 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: Skip this queue=root.default, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,284 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Assigned to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.33333334, absoluteUsedCapacity=0.33333334, numApps=1, numContainers=4 --> , NODE_LOCAL
2016-08-20 21:57:07,284 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:07,285 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:07,285 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,285 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,286 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
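Each priority-1 ask was shown three times above, at Location: localhost, /default-rack, and *: a container request with relaxed locality is expanded into node-level, rack-level, and ANY ResourceRequests so the scheduler can fall back from node to rack to off-switch when the preferred node is busy. A hedged sketch of that expansion, with illustrative names rather than Hadoop's own classes:

```java
import java.util.List;

// Sketch of the node/rack/ANY expansion behind the three showRequests entries
// per priority. The Ask record and expand() are illustrative stand-ins, not
// Hadoop's ResourceRequest API.
public class LocalityExpansion {
    record Ask(int priority, String location, int containers, boolean relaxLocality) {}

    static List<Ask> expand(int priority, String node, String rack, int containers) {
        // With relaxLocality=true the scheduler may fall back node -> rack -> ANY ("*").
        return List.of(
            new Ask(priority, node, containers, true),
            new Ask(priority, rack, containers, true),
            new Ask(priority, "*", containers, true));
    }

    public static void main(String[] args) {
        // Mirrors the log: containers wanted at localhost / /default-rack / *.
        expand(1, "localhost", "/default-rack", 2).forEach(System.out::println);
    }
}
```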
2016-08-20 21:57:07,286 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:07,286 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,287 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,288 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,288 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:07,288 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,289 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,289 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,289 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:07,290 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:07,290 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,292 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,293 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,293 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:07,293 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,292 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,293 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,293 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:07,293 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:07,297 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,297 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,308 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,309 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,310 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,310 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:07,311 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,313 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:07,313 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:07,314 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,314 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,320 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root sending #9
2016-08-20 21:57:07,320 DEBUG [Socket Reader #1 for port 37347] ipc.Server: got #9
2016-08-20 21:57:07,321 DEBUG [IPC Server handler 3 on 37347] ipc.Server: IPC Server handler 3 on 37347: org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 127.0.0.1:46672 Call#9 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:07,321 DEBUG [IPC Server handler 3 on 37347] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:07,323 DEBUG [IPC Server handler 3 on 37347] security.BaseContainerTokenSecretManager: Creating password for container_1471710419543_0001_01_000002 for user container_1471710419543_0001_01_000002 (auth:SIMPLE) to be run on NM localhost:36489
2016-08-20 21:57:07,324 DEBUG [IPC Server handler 3 on 37347] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 2 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 1024 virtual_cores: 1 } expiryTimeStamp: 1471711027322 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427212 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED
2016-08-20 21:57:07,326 DEBUG [IPC Server handler 3 on 37347] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 2 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 1024 virtual_cores: 1 } expiryTimeStamp: 1471711027322 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427212 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED
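The "Creating password for container_..." entries are the RM signing each serialized ContainerTokenIdentifier with the current master key (masterKeyId: -991580041 above); the NM recomputes the same MAC at startContainers time to verify the token ("Retrieving password" later in this log). Hadoop's SecretManager uses an HMAC for this; a self-contained sketch under that assumption, with stand-in identifier and key bytes:

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of token "password" creation as the secret managers in this log do it:
// an HMAC over the serialized token identifier, keyed by the current master key.
// (The identifier and key bytes below are stand-ins; the real input is the
// protobuf-serialized ContainerTokenIdentifier and a rolled master key.)
public class ContainerTokenPassword {
    static byte[] createPassword(byte[] identifier, byte[] masterKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(masterKey, "HmacSHA1"));
        return mac.doFinal(identifier);
    }

    public static void main(String[] args) throws Exception {
        byte[] identifier = "container_1471710419543_0001_01_000003|localhost:36489|root"
                .getBytes(StandardCharsets.UTF_8);                 // stand-in identifier bytes
        byte[] masterKey = "stand-in-master-key".getBytes(StandardCharsets.UTF_8);
        byte[] rmPassword = createPassword(identifier, masterKey); // RM side, at token creation
        byte[] nmPassword = createPassword(identifier, masterKey); // NM side, at startContainers
        System.out.println("passwords match: " + java.util.Arrays.equals(rmPassword, nmPassword));
    }
}
```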
2016-08-20 21:57:07,326 INFO [IPC Server handler 3 on 37347] security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : localhost:36489 for container : container_1471710419543_0001_01_000002
2016-08-20 21:57:07,327 DEBUG [IPC Server handler 3 on 37347] security.BaseNMTokenSecretManager: creating password for appattempt_1471710419543_0001_000001 for user root to run on NM localhost:36489
2016-08-20 21:57:07,327 DEBUG [IPC Server handler 3 on 37347] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570
2016-08-20 21:57:07,328 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.event.RMAppAttemptStatusupdateEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,329 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type STATUS_UPDATE
2016-08-20 21:57:07,330 DEBUG [IPC Server handler 3 on 37347] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570
2016-08-20 21:57:07,330 DEBUG [IPC Server handler 3 on 37347] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000002 of type ACQUIRED
2016-08-20 21:57:07,333 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppRunningOnNodeEvent.EventType: APP_RUNNING_ON_NODE
2016-08-20 21:57:07,333 DEBUG [AsyncDispatcher event handler] rmapp.RMAppImpl: Processing event for application_1471710419543_0001 of type APP_RUNNING_ON_NODE
2016-08-20 21:57:07,333 INFO [IPC Server handler 3 on 37347] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
2016-08-20 21:57:07,334 DEBUG [IPC Server handler 3 on 37347] security.BaseContainerTokenSecretManager: Creating password for container_1471710419543_0001_01_000003 for user container_1471710419543_0001_01_000003 (auth:SIMPLE) to be run on NM localhost:36489
2016-08-20 21:57:07,335 DEBUG [IPC Server handler 3 on 37347] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 3 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 1024 virtual_cores: 1 } expiryTimeStamp: 1471711027333 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427234 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED
2016-08-20 21:57:07,341 DEBUG [IPC Server handler 3 on 37347] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 3 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 1024 virtual_cores: 1 } expiryTimeStamp: 1471711027333 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427234 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED
2016-08-20 21:57:07,342 DEBUG [IPC Server handler 3 on 37347] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000003 of type ACQUIRED
2016-08-20 21:57:07,343 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppRunningOnNodeEvent.EventType: APP_RUNNING_ON_NODE
2016-08-20 21:57:07,344 DEBUG [AsyncDispatcher event handler] rmapp.RMAppImpl: Processing event for application_1471710419543_0001 of type APP_RUNNING_ON_NODE
2016-08-20 21:57:07,344 INFO [IPC Server handler 3 on 37347] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000003 Container Transitioned from ALLOCATED to ACQUIRED
2016-08-20 21:57:07,345 DEBUG [IPC Server handler 3 on 37347] security.BaseContainerTokenSecretManager: Creating password for container_1471710419543_0001_01_000004 for user container_1471710419543_0001_01_000004 (auth:SIMPLE) to be run on NM localhost:36489
2016-08-20 21:57:07,347 DEBUG [IPC Server handler 3 on 37347] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 4 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 1024 virtual_cores: 1 } expiryTimeStamp: 1471711027345 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427269 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED
2016-08-20 21:57:07,352 DEBUG [IPC Server handler 3 on 37347] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 4 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 1024 virtual_cores: 1 } expiryTimeStamp: 1471711027345 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427269 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED
2016-08-20 21:57:07,359 DEBUG [IPC Server handler 3 on 37347] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000004 of type ACQUIRED
2016-08-20 21:57:07,360 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppRunningOnNodeEvent.EventType: APP_RUNNING_ON_NODE
2016-08-20 21:57:07,362 DEBUG [AsyncDispatcher event handler] rmapp.RMAppImpl: Processing event for application_1471710419543_0001 of type APP_RUNNING_ON_NODE
2016-08-20 21:57:07,362 INFO [IPC Server handler 3 on 37347] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000004 Container Transitioned from ALLOCATED to ACQUIRED
2016-08-20 21:57:07,362 DEBUG [IPC Server handler 3 on 37347] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.33333334 Partition:
2016-08-20 21:57:07,367 DEBUG [IPC Server handler 3 on 37347] ipc.Server: Served: allocate queueTime= 1 procesingTime= 46
2016-08-20 21:57:07,368 DEBUG [IPC Server handler 3 on 37347] ipc.Server: IPC Server handler 3 on 37347: responding to org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 127.0.0.1:46672 Call#9 Retry#0
2016-08-20 21:57:07,369 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root got value #9
2016-08-20 21:57:07,370 DEBUG [IPC Server handler 3 on 37347] ipc.Server: IPC Server handler 3 on 37347: responding to org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 127.0.0.1:46672 Call#9 Retry#0 Wrote 829 bytes.
2016-08-20 21:57:07,370 DEBUG [Thread-346] ipc.ProtobufRpcEngine: Call: allocate took 50ms
2016-08-20 21:57:07,371 INFO [Thread-346] impl.AMRMClientImpl: Received new token for : localhost:36489
2016-08-20 21:57:07,374 DEBUG [Thread-346] impl.ContainerManagementProtocolProxy: Opening proxy : localhost:36489
2016-08-20 21:57:07,375 DEBUG [Thread-346] security.SecurityUtil: Acquired token Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,376 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:94)
2016-08-20 21:57:07,376 DEBUG [Thread-346] ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2016-08-20 21:57:07,377 DEBUG [Thread-346] ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@21109725
2016-08-20 21:57:07,378 DEBUG [Thread-346] ipc.Client: The ping interval is 60000 ms.
2016-08-20 21:57:07,378 DEBUG [Thread-346] ipc.Client: Connecting to localhost/127.0.0.1:36489
2016-08-20 21:57:07,379 DEBUG [IPC Server listener on 36489] ipc.Server: Server connection from 127.0.0.1:52624; # active connections: 1; # queued calls: 0
2016-08-20 21:57:07,380 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
2016-08-20 21:57:07,380 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: NEGOTIATE
2016-08-20 21:57:07,380 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,380 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2016-08-20 21:57:07,381 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52624 Call#-33 Retry#-1
2016-08-20 21:57:07,381 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52624 Call#-33 Retry#-1 Wrote 166 bytes.
2016-08-20 21:57:07,381 DEBUG [Thread-346] security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@281d3842
2016-08-20 21:57:07,382 DEBUG [Thread-346] security.NMTokenSelector: Looking for service: 127.0.0.1:36489. Current token is Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,382 DEBUG [Thread-346] security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2016-08-20 21:57:07,383 DEBUG [Thread-346] security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2016-08-20 21:57:07,383 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting username: Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=
2016-08-20 21:57:07,384 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting userPassword
2016-08-20 21:57:07,384 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting realm: default
2016-08-20 21:57:07,385 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: INITIATE token: "charset=utf-8,username=\"Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=\",realm=\"default\",nonce=\"X5S93i1rjkfHm8GdoVqyHmqYpRQn24bw/7uqpRjs\",nc=00000001,cnonce=\"kRUyeZkB/NinYUQJJnENUZNTxq7WWnAEx/1JLT+1\",digest-uri=\"/default\",maxbuf=65536,response=297ac61899fb62f08c225f0c88e51414,qop=auth" auths { method: "TOKEN" mechanism: "DIGEST-MD5" protocol: "" serverId: "default" }
2016-08-20 21:57:07,385 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,385 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Have read input token of size 298 for processing by saslServer.evaluateResponse()
2016-08-20 21:57:07,386 DEBUG [Socket Reader #1 for port 36489] security.BaseNMTokenSecretManager: creating password for appattempt_1471710419543_0001_000001 for user root to run on NM localhost:36489
2016-08-20 21:57:07,386 DEBUG [Socket Reader #1 for port 36489] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570
2016-08-20 21:57:07,387 DEBUG [Socket Reader #1 for port 36489] security.NMTokenSecretManagerInNM: NMToken password retrieved successfully!!
2016-08-20 21:57:07,388 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,388 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,389 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2016-08-20 21:57:07,389 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server context established. Negotiated QoP is auth
2016-08-20 21:57:07,389 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server successfully authenticated client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,389 INFO [Socket Reader #1 for port 36489] ipc.Server: Auth successful for appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,389 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52624 Call#-33 Retry#-1
2016-08-20 21:57:07,390 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52624 Call#-33 Retry#-1 Wrote 64 bytes.
2016-08-20 21:57:07,390 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,390 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,390 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,391 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:07,391 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,391 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:07,391 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:07,391 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,391 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,392 DEBUG [Thread-346] ipc.Client: Negotiated QOP is :auth
2016-08-20 21:57:07,397 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 sending #10
2016-08-20 21:57:07,397 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: starting, having connections 4
2016-08-20 21:57:07,401 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-3
2016-08-20 21:57:07,404 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Successfully authorized userInfo { } protocol: "org.apache.hadoop.yarn.api.ContainerManagementProtocolPB"
2016-08-20 21:57:07,404 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #10
2016-08-20 21:57:07,401 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,404 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,405 DEBUG [IPC Server handler 0 on 36489] ipc.Server: IPC Server handler 0 on 36489: org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.startContainers from 127.0.0.1:52624 Call#10 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:07,405 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,405 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:07,405 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,405 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:07,406 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:07,406 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,406 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,408 DEBUG [IPC Server handler 0 on 36489] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:07,408 DEBUG [IPC Server handler 0 on 36489] lib.MutableRates: signalToContainer
2016-08-20 21:57:07,411 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,411 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,412 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,412 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:07,413 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,413 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:07,413 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:07,413 DEBUG [IPC Server handler 0 on 36489] security.BaseContainerTokenSecretManager: Retrieving password for container_1471710419543_0001_01_000002 for user container_1471710419543_0001_01_000002 (auth:SIMPLE) to be run on NM localhost:36489
2016-08-20 21:57:07,413 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,413 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,414 DEBUG [IPC Server handler 0 on 36489] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 2 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 1024 virtual_cores: 1 } expiryTimeStamp: 1471711027322 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427212 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED
2016-08-20 21:57:07,415 DEBUG [IPC Server handler 0 on 36489] security.NMTokenSecretManagerInNM: NMToken key updated for application attempt : appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,416 INFO [IPC Server handler 0 on 36489] containermanager.ContainerManagerImpl: Start request for container_1471710419543_0001_01_000002 by user root
2016-08-20 21:57:07,440 DEBUG [IPC Server handler 0 on 36489] lib.MutableMetricsFactory: field public org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.startTime with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[])
2016-08-20 21:57:07,440 DEBUG [IPC Server handler 0 on 36489] lib.MutableMetricsFactory: field public org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.finishTime with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[])
2016-08-20 21:57:07,440 DEBUG [IPC Server handler 0 on 36489] lib.MutableMetricsFactory: field public org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.exitCode with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[])
2016-08-20 21:57:07,454 DEBUG [IPC Server handler 0 on 36489] impl.MetricsSystemImpl: ContainerResource_container_1471710419543_0001_01_000002, Metrics for container: container_1471710419543_0001_01_000002
2016-08-20 21:57:07,454 DEBUG [IPC Server handler 0 on 36489] impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2016-08-20 21:57:07,455 DEBUG [IPC Server handler 0 on 36489] impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2016-08-20 21:57:07,455 DEBUG [IPC Server handler 0 on 36489] impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2016-08-20 21:57:07,455 DEBUG [IPC Server handler 0 on 36489] impl.MetricsSourceAdapter: Updating attr cache...
2016-08-20 21:57:07,455 DEBUG [IPC Server handler 0 on 36489] impl.MetricsSourceAdapter: Done. # tags & metrics=0
2016-08-20 21:57:07,455 DEBUG [IPC Server handler 0 on 36489] impl.MetricsSourceAdapter: Updating info cache...
2016-08-20 21:57:07,455 DEBUG [IPC Server handler 0 on 36489] impl.MetricsSystemImpl: []
2016-08-20 21:57:07,455 DEBUG [IPC Server handler 0 on 36489] impl.MetricsSourceAdapter: Done
2016-08-20 21:57:07,455 DEBUG [IPC Server handler 0 on 36489] util.MBeans: Registered Hadoop:service=NodeManager,name=ContainerResource_container_1471710419543_0001_01_000002
2016-08-20 21:57:07,455 DEBUG [IPC Server handler 0 on 36489] impl.MetricsSourceAdapter: MBean for source ContainerResource_container_1471710419543_0001_01_000002 registered.
2016-08-20 21:57:07,455 DEBUG [IPC Server handler 0 on 36489] impl.MetricsSystemImpl: Registered source ContainerResource_container_1471710419543_0001_01_000002
2016-08-20 21:57:07,456 INFO [IPC Server handler 0 on 36489] containermanager.ContainerManagerImpl: Creating a new application reference for app application_1471710419543_0001
2016-08-20 21:57:07,456 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationInitEvent.EventType: INIT_APPLICATION
2016-08-20 21:57:07,457 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type INIT_APPLICATION
2016-08-20 21:57:07,457 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Application application_1471710419543_0001 transitioned from NEW to INITING
2016-08-20 21:57:07,457 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationContainerInitEvent.EventType: INIT_CONTAINER
2016-08-20 21:57:07,457 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type INIT_CONTAINER
2016-08-20 21:57:07,457 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Adding container_1471710419543_0001_01_000002 to application application_1471710419543_0001
2016-08-20 21:57:07,458 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerAppStartedEvent.EventType: APPLICATION_STARTED
2016-08-20 21:57:07,458 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEvent.EventType: APPLICATION_LOG_HANDLING_INITED
2016-08-20 21:57:07,458 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type APPLICATION_LOG_HANDLING_INITED
2016-08-20 21:57:07,458 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.ApplicationLocalizationEvent.EventType: INIT_APPLICATION_RESOURCES
2016-08-20 21:57:07,458 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationInitedEvent.EventType: APPLICATION_INITED
2016-08-20 21:57:07,458 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type APPLICATION_INITED
2016-08-20 21:57:07,458 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Application application_1471710419543_0001 transitioned from INITING to RUNNING
2016-08-20 21:57:07,458 INFO [IPC Server handler 0 on 36489] nodemanager.NMAuditLogger: USER=root IP=127.0.0.1 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000002
2016-08-20 21:57:07,458 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerInitEvent.EventType: INIT_CONTAINER
2016-08-20 21:57:07,458 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000002 of type INIT_CONTAINER
2016-08-20 21:57:07,459 DEBUG [IPC Server handler 0 on 36489] ipc.Server: Served: startContainers queueTime= 4 procesingTime= 51
2016-08-20 21:57:07,460 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000002 transitioned from NEW to LOCALIZED
2016-08-20 21:57:07,462 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEvent.EventType: CONTAINER_INIT
2016-08-20 21:57:07,463 INFO [AsyncDispatcher event handler] containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1471710419543_0001
2016-08-20 21:57:07,463 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEvent.EventType: LAUNCH_CONTAINER
2016-08-20 21:57:07,464 DEBUG [IPC Server handler 0 on 36489] ipc.Server: IPC Server handler 0 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.startContainers from 127.0.0.1:52624 Call#10 Retry#0
2016-08-20 21:57:07,464 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 got value #10
2016-08-20 21:57:07,464 DEBUG [Thread-346] ipc.ProtobufRpcEngine: Call: startContainers took 87ms
2016-08-20 21:57:07,465 DEBUG [Thread-346] impl.ContainerManagementProtocolProxy: Opening proxy : localhost:36489
2016-08-20 21:57:07,465 DEBUG [IPC Server handler 0 on 36489] ipc.Server: IPC Server handler 0 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.startContainers from 127.0.0.1:52624 Call#10 Retry#0 Wrote 51 bytes.
2016-08-20 21:57:07,465 DEBUG [ContainersLauncher #0] concurrent.HadoopThreadPoolExecutor: beforeExecute in thread: ContainersLauncher #0, runnable type: java.util.concurrent.FutureTask
2016-08-20 21:57:07,465 DEBUG [Thread-346] security.SecurityUtil: Acquired token Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,466 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:94)
2016-08-20 21:57:07,466 DEBUG [Thread-346] ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
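On the NM side, every INIT_APPLICATION / INIT_CONTAINER / LAUNCH_CONTAINER entry above is one event flowing through a single dispatcher thread to a per-type handler, which is why each "Dispatching the event ..." line is followed by a matching "Processing ... of type ..." line. A minimal sketch of that queue-and-route pattern, with illustrative names rather than the real AsyncDispatcher API:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Minimal single-threaded event dispatcher in the style of the AsyncDispatcher
// entries above: events are queued by producers, then one handler loop dequeues
// each event and routes it to the handler registered for its type.
public class MiniDispatcher {
    record Event(String type, String target) {}

    private final BlockingQueue<Event> queue = new LinkedBlockingQueue<>();
    private final Map<String, Consumer<Event>> handlers;

    MiniDispatcher(Map<String, Consumer<Event>> handlers) { this.handlers = handlers; }

    void dispatch(Event e) { queue.add(e); }           // producer side (IPC handlers, etc.)

    void runOnce() throws InterruptedException {       // one "AsyncDispatcher event handler" step
        Event e = queue.take();
        System.out.println("Dispatching the event " + e.type());
        handlers.get(e.type()).accept(e);
    }

    public static void main(String[] args) throws InterruptedException {
        MiniDispatcher d = new MiniDispatcher(Map.of(
            "INIT_APPLICATION", e -> System.out.println("Processing " + e.target() + " of type INIT_APPLICATION"),
            "INIT_CONTAINER",   e -> System.out.println("Processing " + e.target() + " of type INIT_CONTAINER")));
        d.dispatch(new Event("INIT_APPLICATION", "application_1471710419543_0001"));
        d.dispatch(new Event("INIT_CONTAINER", "container_1471710419543_0001_01_000002"));
        d.runOnce();
        d.runOnce();
    }
}
```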
2016-08-20 21:57:07,465 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: closed
2016-08-20 21:57:07,467 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: stopped, remaining connections 3
2016-08-20 21:57:07,468 DEBUG [Thread-346] ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@21109725
2016-08-20 21:57:07,474 DEBUG [Thread-346] ipc.Client: The ping interval is 60000 ms.
2016-08-20 21:57:07,474 DEBUG [Thread-346] ipc.Client: Connecting to localhost/127.0.0.1:36489
2016-08-20 21:57:07,475 DEBUG [IPC Server listener on 36489] ipc.Server: Server connection from 127.0.0.1:52626; # active connections: 1; # queued calls: 0
2016-08-20 21:57:07,476 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
2016-08-20 21:57:07,476 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: NEGOTIATE
2016-08-20 21:57:07,477 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,477 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2016-08-20 21:57:07,477 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52626 Call#-33 Retry#-1
2016-08-20 21:57:07,477 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52626 Call#-33 Retry#-1 Wrote 166 bytes.
2016-08-20 21:57:07,478 DEBUG [Thread-346] security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@f950b33
2016-08-20 21:57:07,479 DEBUG [Thread-346] security.NMTokenSelector: Looking for service: 127.0.0.1:36489. Current token is Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,479 DEBUG [Thread-346] security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2016-08-20 21:57:07,480 DEBUG [Thread-346] security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2016-08-20 21:57:07,480 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting username: Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=
2016-08-20 21:57:07,480 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting userPassword
2016-08-20 21:57:07,480 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting realm: default
2016-08-20 21:57:07,481 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: INITIATE token: "charset=utf-8,username=\"Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=\",realm=\"default\",nonce=\"r8t4XGzas1hBLGZtGjzU1MdlSwxNj9M37f+sYLim\",nc=00000001,cnonce=\"X+6XJdGesW29k5MOKX5NI09BX6yIxwnt7KRjsCEN\",digest-uri=\"/default\",maxbuf=65536,response=411d8e0ff86f2ee38210c8baaea8f042,qop=auth" auths { method: "TOKEN" mechanism: "DIGEST-MD5" protocol: "" serverId: "default" }
2016-08-20 21:57:07,483 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,483 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Have read input token of size 298 for processing by saslServer.evaluateResponse()
2016-08-20 21:57:07,484 DEBUG [Socket Reader #1 for port 36489] security.BaseNMTokenSecretManager: creating password for appattempt_1471710419543_0001_000001 for user root to run on NM localhost:36489
2016-08-20 21:57:07,484 DEBUG [Socket Reader #1 for port 36489] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570
2016-08-20 21:57:07,484 DEBUG [Socket Reader #1 for port 36489] security.NMTokenSecretManagerInNM: NMToken password retrieved successfully!!
2016-08-20 21:57:07,485 DEBUG [ContainersLauncher #0] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:327)
2016-08-20 21:57:07,485 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,486 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,486 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2016-08-20 21:57:07,486 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server context established. Negotiated QoP is auth
2016-08-20 21:57:07,487 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server successfully authenticated client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,487 INFO [Socket Reader #1 for port 36489] ipc.Server: Auth successful for appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,487 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52626 Call#-33 Retry#-1
2016-08-20 21:57:07,487 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52626 Call#-33 Retry#-1 Wrote 64 bytes.
2016-08-20 21:57:07,490 DEBUG [Thread-346] ipc.Client: Negotiated QOP is :auth
2016-08-20 21:57:07,491 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,491 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,492 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,492 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:07,497 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Container container_1471710419543_0001_01_000002 is the first container to get launched for application application_1471710419543_0001
2016-08-20 21:57:07,497 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,497 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:07,497 DEBUG [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000002 of type LAUNCHED
2016-08-20 21:57:07,498 INFO [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000002 Container Transitioned from ACQUIRED to RUNNING
2016-08-20 21:57:07,498 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:07,498 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,498 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,499 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 sending #11
2016-08-20 21:57:07,499 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: starting, having connections 4
2016-08-20 21:57:07,501 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-3
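The NEGOTIATE / INITIATE / SUCCESS exchange above is Hadoop RPC's SASL handshake: the base64 NMTokenIdentifier is the DIGEST-MD5 username and the token's password is the shared secret, which is why BaseNMTokenSecretManager recomputes the password on the server side. A self-contained sketch of the same mechanism using only the JDK's javax.security.sasl API (the user, secret, protocol, and realm values are illustrative, not Hadoop's internals):

    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.sasl.AuthorizeCallback;
    import javax.security.sasl.RealmCallback;
    import javax.security.sasl.Sasl;
    import javax.security.sasl.SaslClient;
    import javax.security.sasl.SaslServer;
    import java.util.Collections;
    import java.util.Map;

    public class DigestMd5Sketch {
        public static void main(String[] args) throws Exception {
            final String user = "base64TokenIdentifier";          // stands in for the NMTokenIdentifier
            final char[] secret = "tokenPassword".toCharArray();  // stands in for the NMToken password
            Map<String, String> props = Collections.singletonMap(Sasl.QOP, "auth"); // "Negotiated QoP is auth"

            CallbackHandler clientCbh = callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) ((NameCallback) cb).setName(user);  // "setting username"
                    else if (cb instanceof PasswordCallback) ((PasswordCallback) cb).setPassword(secret);
                    else if (cb instanceof RealmCallback) ((RealmCallback) cb).setText(((RealmCallback) cb).getDefaultText());
                }
            };
            CallbackHandler serverCbh = callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof PasswordCallback) ((PasswordCallback) cb).setPassword(secret); // "setting password for client"
                    else if (cb instanceof AuthorizeCallback) ((AuthorizeCallback) cb).setAuthorized(true); // "canonicalized client ID"
                }
            };

            SaslClient client = Sasl.createSaslClient(new String[]{"DIGEST-MD5"}, null, "rpc", "default", props, clientCbh);
            SaslServer server = Sasl.createSaslServer("DIGEST-MD5", "rpc", "default", props, serverCbh);

            byte[] challenge = server.evaluateResponse(new byte[0]);  // server challenge: realm, nonce
            byte[] initiate = client.evaluateChallenge(challenge);    // INITIATE token: digest response
            byte[] success = server.evaluateResponse(initiate);       // "Will send SUCCESS token ... from saslServer"
            client.evaluateChallenge(success);                        // client verifies the server's rspauth
            System.out.println("complete=" + server.isComplete() + ", qop=" + server.getNegotiatedProperty(Sasl.QOP));
        }
    }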
2016-08-20 21:57:07,501 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Successfully authorized userInfo { } protocol: "org.apache.hadoop.yarn.api.ContainerManagementProtocolPB"
2016-08-20 21:57:07,501 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #11
2016-08-20 21:57:07,501 DEBUG [IPC Server handler 2 on 36489] ipc.Server: IPC Server handler 2 on 36489: org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.getContainerStatuses from 127.0.0.1:52626 Call#11 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:07,504 DEBUG [IPC Server handler 2 on 36489] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:07,505 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,508 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,509 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,509 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:07,509 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,509 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:07,509 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:07,509 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,510 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,507 INFO [IPC Server handler 2 on 36489] containermanager.ContainerManagerImpl: Getting container-status for container_1471710419543_0001_01_000002
2016-08-20 21:57:07,510 INFO [IPC Server handler 2 on 36489] containermanager.ContainerManagerImpl: Returning ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]
2016-08-20 21:57:07,512 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,512 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,513 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,513 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:07,514 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,514 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:07,514 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:07,515 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,515 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,519 DEBUG [IPC Server handler 2 on 36489] ipc.Server: Served: getContainerStatuses queueTime= 4 processingTime= 14
2016-08-20 21:57:07,520 DEBUG [IPC Server handler 2 on 36489] ipc.Server: IPC Server handler 2 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.getContainerStatuses from 127.0.0.1:52626 Call#11 Retry#0
2016-08-20 21:57:07,521 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 got value #11
2016-08-20 21:57:07,521 DEBUG [Thread-346] ipc.ProtobufRpcEngine: Call: getContainerStatuses took 48ms
2016-08-20 21:57:07,523 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: disconnecting client 127.0.0.1:52626. Number of active connections: 0
2016-08-20 21:57:07,523 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: closed
2016-08-20 21:57:07,523 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: stopped, remaining connections 3
2016-08-20 21:57:07,524 DEBUG [Thread-346] impl.ContainerManagementProtocolProxy: Opening proxy : localhost:36489
2016-08-20 21:57:07,525 DEBUG [Thread-346] security.SecurityUtil: Acquired token Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,525 DEBUG [IPC Server handler 2 on 36489] ipc.Server: IPC Server handler 2 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.getContainerStatuses from 127.0.0.1:52626 Call#11 Retry#0 Wrote 77 bytes.
2016-08-20 21:57:07,526 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:94)
2016-08-20 21:57:07,526 DEBUG [Thread-346] ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2016-08-20 21:57:07,526 DEBUG [Thread-346] ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@21109725
2016-08-20 21:57:07,527 DEBUG [Thread-346] ipc.Client: The ping interval is 60000 ms.
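The getContainerStatuses round trip above (Call#11, 48ms) is what NMClient performs under getContainerStatus for a single container; ExitStatus -1000 is ContainerExitStatus.INVALID, i.e. the container is still running. A sketch, assuming an nmClient initialized as in the earlier fragment:

    import org.apache.hadoop.yarn.api.records.ContainerId;
    import org.apache.hadoop.yarn.api.records.ContainerStatus;
    import org.apache.hadoop.yarn.api.records.NodeId;
    import org.apache.hadoop.yarn.client.api.NMClient;

    public class ContainerStatusSketch {
        // Mirrors the "Getting container-status for container_..." handler log above.
        static ContainerStatus poll(NMClient nmClient, ContainerId id, NodeId node) throws Exception {
            ContainerStatus status = nmClient.getContainerStatus(id, node);
            System.out.println(status.getState() + ", exit=" + status.getExitStatus());
            return status;
        }
    }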
2016-08-20 21:57:07,527 DEBUG [Thread-346] ipc.Client: Connecting to localhost/127.0.0.1:36489
2016-08-20 21:57:07,528 DEBUG [IPC Server listener on 36489] ipc.Server: Server connection from 127.0.0.1:52628; # active connections: 1; # queued calls: 0
2016-08-20 21:57:07,529 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
2016-08-20 21:57:07,529 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: NEGOTIATE
2016-08-20 21:57:07,529 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,529 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2016-08-20 21:57:07,530 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52628 Call#-33 Retry#-1
2016-08-20 21:57:07,530 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52628 Call#-33 Retry#-1 Wrote 166 bytes.
2016-08-20 21:57:07,530 DEBUG [Thread-346] security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@cbc888e
2016-08-20 21:57:07,531 DEBUG [Thread-346] security.NMTokenSelector: Looking for service: 127.0.0.1:36489. Current token is Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,531 DEBUG [Thread-346] security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2016-08-20 21:57:07,532 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent.EventType: CONTAINER_LAUNCHED
2016-08-20 21:57:07,532 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000002 of type CONTAINER_LAUNCHED
2016-08-20 21:57:07,532 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000002 transitioned from LOCALIZED to RUNNING
2016-08-20 21:57:07,532 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerStartMonitoringEvent.EventType: START_MONITORING_CONTAINER
2016-08-20 21:57:07,532 DEBUG [Thread-346] security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2016-08-20 21:57:07,535 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting username: Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=
2016-08-20 21:57:07,535 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting userPassword
2016-08-20 21:57:07,535 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting realm: default
2016-08-20 21:57:07,536 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: INITIATE token: "charset=utf-8,username=\"Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=\",realm=\"default\",nonce=\"hdeSQIuYwcYRGlen4gs40r3d7p9c+EGqxfFE9bnn\",nc=00000001,cnonce=\"U+q+/ihOaqNTm+9iJqsPlaFscSoe8LL+5JWhGidF\",digest-uri=\"/default\",maxbuf=65536,response=82044add66a34e7bf922b31ea5c0652d,qop=auth" auths { method: "TOKEN" mechanism: "DIGEST-MD5" protocol: "" serverId: "default" }
2016-08-20 21:57:07,537 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,537 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Have read input token of size 298 for processing by saslServer.evaluateResponse()
2016-08-20 21:57:07,538 DEBUG [Socket Reader #1 for port 36489] security.BaseNMTokenSecretManager: creating password for appattempt_1471710419543_0001_000001 for user root to run on NM localhost:36489
2016-08-20 21:57:07,543 DEBUG [Socket Reader #1 for port 36489] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570
2016-08-20 21:57:07,544 DEBUG [Socket Reader #1 for port 36489] security.NMTokenSecretManagerInNM: NMToken password retrieved successfully!!
2016-08-20 21:57:07,544 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,545 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,546 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2016-08-20 21:57:07,546 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server context established. Negotiated QoP is auth
2016-08-20 21:57:07,546 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server successfully authenticated client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,546 INFO [Socket Reader #1 for port 36489] ipc.Server: Auth successful for appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,547 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52628 Call#-33 Retry#-1
2016-08-20 21:57:07,547 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52628 Call#-33 Retry#-1 Wrote 64 bytes.
2016-08-20 21:57:07,550 INFO [ContainersLauncher #0] nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000002/default_container_executor.sh]
2016-08-20 21:57:07,547 DEBUG [Thread-346] ipc.Client: Negotiated QOP is :auth
2016-08-20 21:57:07,554 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 sending #12
2016-08-20 21:57:07,566 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: starting, having connections 4
2016-08-20 21:57:07,567 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-3
2016-08-20 21:57:07,567 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Successfully authorized userInfo { } protocol: "org.apache.hadoop.yarn.api.ContainerManagementProtocolPB"
2016-08-20 21:57:07,567 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #12
2016-08-20 21:57:07,568 DEBUG [IPC Server handler 1 on 36489] ipc.Server: IPC Server handler 1 on 36489: org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.startContainers from 127.0.0.1:52628 Call#12 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:07,569 DEBUG [IPC Server handler 1 on 36489] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:07,572 DEBUG [IPC Server handler 1 on 36489] security.BaseContainerTokenSecretManager: Retrieving password for container_1471710419543_0001_01_000003 for user container_1471710419543_0001_01_000003 (auth:SIMPLE) to be run on NM localhost:36489
2016-08-20 21:57:07,573 DEBUG [IPC Server handler 1 on 36489] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 3 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 1024 virtual_cores: 1 } expiryTimeStamp: 1471711027333 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427234 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED
2016-08-20 21:57:07,573 INFO [IPC Server handler 1 on 36489] containermanager.ContainerManagerImpl: Start request for container_1471710419543_0001_01_000003 by user root
2016-08-20 21:57:07,577 DEBUG [IPC Server handler 1 on 36489] lib.MutableMetricsFactory: field public org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.startTime with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[])
2016-08-20 21:57:07,578 DEBUG [IPC Server handler 1 on 36489] lib.MutableMetricsFactory: field public org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.finishTime with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[])
2016-08-20 21:57:07,579 DEBUG [IPC Server handler 1 on 36489] lib.MutableMetricsFactory: field public org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.exitCode with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[])
2016-08-20 21:57:07,586 DEBUG [IPC Server handler 1 on 36489] impl.MetricsSystemImpl: ContainerResource_container_1471710419543_0001_01_000003, Metrics for container: container_1471710419543_0001_01_000003
2016-08-20 21:57:07,586 DEBUG [IPC Server handler 1 on 36489] impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2016-08-20 21:57:07,586 DEBUG [IPC Server handler 1 on 36489] impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2016-08-20 21:57:07,586 DEBUG [IPC Server handler 1 on 36489] impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2016-08-20 21:57:07,586 DEBUG [IPC Server handler 1 on 36489] impl.MetricsSourceAdapter: Updating attr cache...
2016-08-20 21:57:07,586 DEBUG [IPC Server handler 1 on 36489] impl.MetricsSourceAdapter: Done. # tags & metrics=0
2016-08-20 21:57:07,586 DEBUG [IPC Server handler 1 on 36489] impl.MetricsSourceAdapter: Updating info cache...
2016-08-20 21:57:07,586 DEBUG [IPC Server handler 1 on 36489] impl.MetricsSystemImpl: []
2016-08-20 21:57:07,586 DEBUG [IPC Server handler 1 on 36489] impl.MetricsSourceAdapter: Done
2016-08-20 21:57:07,586 DEBUG [IPC Server handler 1 on 36489] util.MBeans: Registered Hadoop:service=NodeManager,name=ContainerResource_container_1471710419543_0001_01_000003
2016-08-20 21:57:07,587 DEBUG [IPC Server handler 1 on 36489] impl.MetricsSourceAdapter: MBean for source ContainerResource_container_1471710419543_0001_01_000003 registered.
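The MutableMetricsFactory lines above show metrics2 reflecting over the @Metric-annotated fields of ContainerMetrics and registering the per-container source as an MBean. A minimal source in the same style (the class and metric names here are illustrative, not the NM's):

    import org.apache.hadoop.metrics2.annotation.Metric;
    import org.apache.hadoop.metrics2.annotation.Metrics;
    import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
    import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

    @Metrics(about = "Example per-container resource source", context = "yarn")
    public class ExampleContainerMetrics {
        // Instantiated by MutableMetricsFactory, like ContainerMetrics.startTime above.
        @Metric("Container start time") MutableGaugeLong startTime;

        public static ExampleContainerMetrics create(String containerId) {
            // Triggers the "Registered source ..." / MBean registration seen in the log.
            return DefaultMetricsSystem.instance().register(
                "ExampleResource_" + containerId, "Metrics for container: " + containerId,
                new ExampleContainerMetrics());
        }
    }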
2016-08-20 21:57:07,587 DEBUG [IPC Server handler 1 on 36489] impl.MetricsSystemImpl: Registered source ContainerResource_container_1471710419543_0001_01_000003
2016-08-20 21:57:07,587 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationContainerInitEvent.EventType: INIT_CONTAINER
2016-08-20 21:57:07,588 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type INIT_CONTAINER
2016-08-20 21:57:07,588 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Adding container_1471710419543_0001_01_000003 to application application_1471710419543_0001
2016-08-20 21:57:07,588 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerInitEvent.EventType: INIT_CONTAINER
2016-08-20 21:57:07,589 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000003 of type INIT_CONTAINER
2016-08-20 21:57:07,589 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000003 transitioned from NEW to LOCALIZED
2016-08-20 21:57:07,589 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEvent.EventType: CONTAINER_INIT
2016-08-20 21:57:07,589 INFO [AsyncDispatcher event handler] containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1471710419543_0001
2016-08-20 21:57:07,589 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEvent.EventType: LAUNCH_CONTAINER
2016-08-20 21:57:07,587 INFO [IPC Server handler 1 on 36489] nodemanager.NMAuditLogger: USER=root IP=127.0.0.1 OPERATION=Start Container Request TARGET=ContainerManagerImpl RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000003
2016-08-20 21:57:07,589 DEBUG [IPC Server handler 1 on 36489] ipc.Server: Served: startContainers queueTime= 2 processingTime= 20
2016-08-20 21:57:07,594 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,594 DEBUG [IPC Server handler 1 on 36489] ipc.Server: IPC Server handler 1 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.startContainers from 127.0.0.1:52628 Call#12 Retry#0
2016-08-20 21:57:07,594 DEBUG [IPC Server handler 1 on 36489] ipc.Server: IPC Server handler 1 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.startContainers from 127.0.0.1:52628 Call#12 Retry#0 Wrote 51 bytes.
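The DefaultContainerExecutor entry at 21:57:07,550 above shows what a launch ultimately is: the NM runs nice -n 0 bash <container dir>/default_container_executor.sh as a child process, and the ContainersLauncher threads seen here simply wait on it. A simplified sketch of that exec step with plain ProcessBuilder (the real executor also prepares local dirs, the environment, and exit-code files):

    import java.io.File;
    import java.io.IOException;

    public class LaunchSketch {
        // 'script' stands for the per-container default_container_executor.sh.
        static int run(File script) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(
                "nice", "-n", "0",                 // same niceness as the log line
                "bash", script.getAbsolutePath());
            pb.inheritIO();                        // the real NM redirects to the container's log files
            Process p = pb.start();
            return p.waitFor();                    // the exit code becomes the container's ExitStatus
        }
    }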
2016-08-20 21:57:07,594 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 got value #12
2016-08-20 21:57:07,594 DEBUG [ContainersLauncher #1] concurrent.HadoopThreadPoolExecutor: beforeExecute in thread: ContainersLauncher #1, runnable type: java.util.concurrent.FutureTask
2016-08-20 21:57:07,596 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 2 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,595 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: disconnecting client 127.0.0.1:52628. Number of active connections: 0
2016-08-20 21:57:07,595 DEBUG [Thread-346] ipc.ProtobufRpcEngine: Call: startContainers took 68ms
2016-08-20 21:57:07,597 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,597 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:07,597 DEBUG [Thread-346] impl.ContainerManagementProtocolProxy: Opening proxy : localhost:36489
2016-08-20 21:57:07,598 DEBUG [Thread-346] security.SecurityUtil: Acquired token Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,599 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:94)
2016-08-20 21:57:07,599 DEBUG [Thread-346] ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2016-08-20 21:57:07,599 DEBUG [Thread-346] ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@21109725
2016-08-20 21:57:07,600 DEBUG [Thread-346] ipc.Client: The ping interval is 60000 ms.
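The AM side driving this whole sequence, visible here only as Thread-346 opening proxies, is a request/allocate/start loop against the RM. A rough sketch with the public AMRMClient API, using the memory: 1024, virtual_cores: 1, priority: 1 values from the ContainerTokenIdentifier above (registration arguments and error handling are reduced to the bare minimum):

    import java.util.List;
    import org.apache.hadoop.yarn.api.records.Container;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class AllocateLoopSketch {
        static void run() throws Exception {
            AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
            rmClient.init(new YarnConfiguration());
            rmClient.start();
            rmClient.registerApplicationMaster("localhost", 0, "");
            // The shape of the containers in this log: 1024 MB, 1 vcore, priority 1.
            Resource capability = Resource.newInstance(1024, 1);
            Priority priority = Priority.newInstance(1);
            rmClient.addContainerRequest(new ContainerRequest(capability, null, null, priority));
            List<Container> allocated = rmClient.allocate(0.1f).getAllocatedContainers();
            for (Container c : allocated) {
                // Each allocation leads to a startContainers RPC like Call#10/#12/#14 above,
                // e.g. via the NMClient sketch earlier in this log.
                System.out.println("allocated " + c.getId());
            }
        }
    }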
2016-08-20 21:57:07,600 DEBUG [Thread-346] ipc.Client: Connecting to localhost/127.0.0.1:36489
2016-08-20 21:57:07,601 DEBUG [IPC Server listener on 36489] ipc.Server: Server connection from 127.0.0.1:52630; # active connections: 1; # queued calls: 0
2016-08-20 21:57:07,601 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
2016-08-20 21:57:07,601 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: NEGOTIATE
2016-08-20 21:57:07,595 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: closed
2016-08-20 21:57:07,602 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,602 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: stopped, remaining connections 4
2016-08-20 21:57:07,603 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2016-08-20 21:57:07,605 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52630 Call#-33 Retry#-1
2016-08-20 21:57:07,605 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52630 Call#-33 Retry#-1 Wrote 166 bytes.
2016-08-20 21:57:07,604 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,606 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:07,606 DEBUG [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000003 of type LAUNCHED
2016-08-20 21:57:07,606 INFO [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000003 Container Transitioned from ACQUIRED to RUNNING
2016-08-20 21:57:07,606 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:07,606 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,606 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,607 DEBUG [Thread-346] security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@1bffb4ae
2016-08-20 21:57:07,608 DEBUG [Thread-346] security.NMTokenSelector: Looking for service: 127.0.0.1:36489. Current token is Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,608 DEBUG [Thread-346] security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2016-08-20 21:57:07,609 DEBUG [Thread-346] security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2016-08-20 21:57:07,609 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,609 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,609 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting username: Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=
2016-08-20 21:57:07,609 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting userPassword
2016-08-20 21:57:07,609 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting realm: default
2016-08-20 21:57:07,610 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,610 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:07,610 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,610 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:07,610 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:07,610 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,610 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: INITIATE token: "charset=utf-8,username=\"Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=\",realm=\"default\",nonce=\"lLbVBJENyLHMlCMpGGxhyN77z98kgzc0N51Maynu\",nc=00000001,cnonce=\"kNci3nMEc0C5GwAeoTiTee1CcyOd8QS1nmP9ZwCd\",digest-uri=\"/default\",maxbuf=65536,response=1c19991eb02f673f2ba47b7229f3fcfa,qop=auth" auths { method: "TOKEN" mechanism: "DIGEST-MD5" protocol: "" serverId: "default" }
2016-08-20 21:57:07,610 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,611 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,611 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Have read input token of size 298 for processing by saslServer.evaluateResponse()
2016-08-20 21:57:07,611 DEBUG [Socket Reader #1 for port 36489] security.BaseNMTokenSecretManager: creating password for appattempt_1471710419543_0001_000001 for user root to run on NM localhost:36489
2016-08-20 21:57:07,612 DEBUG [Socket Reader #1 for port 36489] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570
2016-08-20 21:57:07,612 DEBUG [ContainersLauncher #1] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:327)
2016-08-20 21:57:07,612 DEBUG [Socket Reader #1 for port 36489] security.NMTokenSecretManagerInNM: NMToken password retrieved successfully!!
2016-08-20 21:57:07,613 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,613 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,613 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,613 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,613 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,614 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:07,614 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2016-08-20 21:57:07,616 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server context established. Negotiated QoP is auth
2016-08-20 21:57:07,616 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server successfully authenticated client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,616 INFO [Socket Reader #1 for port 36489] ipc.Server: Auth successful for appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,617 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52630 Call#-33 Retry#-1
2016-08-20 21:57:07,617 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52630 Call#-33 Retry#-1 Wrote 64 bytes.
2016-08-20 21:57:07,615 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,617 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:07,617 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:07,617 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,617 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,618 DEBUG [Thread-346] ipc.Client: Negotiated QOP is :auth
2016-08-20 21:57:07,625 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 sending #13
2016-08-20 21:57:07,625 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: starting, having connections 4
2016-08-20 21:57:07,626 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-3
2016-08-20 21:57:07,626 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Successfully authorized userInfo { } protocol: "org.apache.hadoop.yarn.api.ContainerManagementProtocolPB"
2016-08-20 21:57:07,627 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #13
2016-08-20 21:57:07,627 DEBUG [IPC Server handler 3 on 36489] ipc.Server: IPC Server handler 3 on 36489: org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.getContainerStatuses from 127.0.0.1:52630 Call#13 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:07,631 DEBUG [IPC Server handler 3 on 36489] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:07,632 INFO [IPC Server handler 3 on 36489] containermanager.ContainerManagerImpl: Getting container-status for container_1471710419543_0001_01_000003
2016-08-20 21:57:07,632 INFO [IPC Server handler 3 on 36489] containermanager.ContainerManagerImpl: Returning ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]
2016-08-20 21:57:07,632 DEBUG [IPC Server handler 3 on 36489] ipc.Server: Served: getContainerStatuses queueTime= 4 processingTime= 1
2016-08-20 21:57:07,635 DEBUG [IPC Server handler 3 on 36489] ipc.Server: IPC Server handler 3 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.getContainerStatuses from 127.0.0.1:52630 Call#13 Retry#0
2016-08-20 21:57:07,635 DEBUG [IPC Server handler 3 on 36489] ipc.Server: IPC Server handler 3 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.getContainerStatuses from 127.0.0.1:52630 Call#13 Retry#0 Wrote 77 bytes.
2016-08-20 21:57:07,635 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 got value #13
2016-08-20 21:57:07,635 DEBUG [Thread-346] ipc.ProtobufRpcEngine: Call: getContainerStatuses took 35ms
2016-08-20 21:57:07,635 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: disconnecting client 127.0.0.1:52630. Number of active connections: 0
2016-08-20 21:57:07,636 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: closed
2016-08-20 21:57:07,636 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: stopped, remaining connections 3
2016-08-20 21:57:07,636 DEBUG [Thread-346] impl.ContainerManagementProtocolProxy: Opening proxy : localhost:36489
2016-08-20 21:57:07,636 DEBUG [Thread-346] security.SecurityUtil: Acquired token Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,637 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:94)
2016-08-20 21:57:07,638 DEBUG [Thread-346] ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2016-08-20 21:57:07,638 DEBUG [Thread-346] ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@21109725
2016-08-20 21:57:07,639 DEBUG [Thread-346] ipc.Client: The ping interval is 60000 ms.
2016-08-20 21:57:07,639 DEBUG [Thread-346] ipc.Client: Connecting to localhost/127.0.0.1:36489
2016-08-20 21:57:07,639 DEBUG [IPC Server listener on 36489] ipc.Server: Server connection from 127.0.0.1:52632; # active connections: 1; # queued calls: 0
2016-08-20 21:57:07,640 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
2016-08-20 21:57:07,641 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: NEGOTIATE
2016-08-20 21:57:07,641 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,642 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2016-08-20 21:57:07,642 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52632 Call#-33 Retry#-1
2016-08-20 21:57:07,642 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52632 Call#-33 Retry#-1 Wrote 166 bytes.
2016-08-20 21:57:07,642 DEBUG [Thread-346] security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@424f8f42
2016-08-20 21:57:07,643 DEBUG [Thread-346] security.NMTokenSelector: Looking for service: 127.0.0.1:36489. Current token is Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,643 DEBUG [Thread-346] security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2016-08-20 21:57:07,644 DEBUG [Thread-346] security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2016-08-20 21:57:07,645 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting username: Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=
2016-08-20 21:57:07,645 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting userPassword
2016-08-20 21:57:07,645 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting realm: default
2016-08-20 21:57:07,646 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: INITIATE token: "charset=utf-8,username=\"Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=\",realm=\"default\",nonce=\"7y8pRY4ero0WB4wzOgEfFdkLZ+gWTSr6xyrdoTgM\",nc=00000001,cnonce=\"yj0sI1DF4KdSArWM33uN21fXMH9VixYXLF8BsbQO\",digest-uri=\"/default\",maxbuf=65536,response=61e243c2100afc615d8ffbe846c8288f,qop=auth" auths { method: "TOKEN" mechanism: "DIGEST-MD5" protocol: "" serverId: "default" }
2016-08-20 21:57:07,651 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,651 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Have read input token of size 298 for processing by saslServer.evaluateResponse()
2016-08-20 21:57:07,652 DEBUG [Socket Reader #1 for port 36489] security.BaseNMTokenSecretManager: creating password for appattempt_1471710419543_0001_000001 for user root to run on NM localhost:36489
2016-08-20 21:57:07,652 DEBUG [Socket Reader #1 for port 36489] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570
2016-08-20 21:57:07,652 DEBUG [Socket Reader #1 for port 36489] security.NMTokenSecretManagerInNM: NMToken password retrieved successfully!!
2016-08-20 21:57:07,652 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,653 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,653 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2016-08-20 21:57:07,653 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server context established. Negotiated QoP is auth
2016-08-20 21:57:07,654 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server successfully authenticated client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,654 INFO [Socket Reader #1 for port 36489] ipc.Server: Auth successful for appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,654 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52632 Call#-33 Retry#-1
2016-08-20 21:57:07,654 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52632 Call#-33 Retry#-1 Wrote 64 bytes.
2016-08-20 21:57:07,656 DEBUG [Thread-346] ipc.Client: Negotiated QOP is :auth
2016-08-20 21:57:07,664 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 sending #14
2016-08-20 21:57:07,665 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: starting, having connections 4
2016-08-20 21:57:07,667 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-3
2016-08-20 21:57:07,667 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Successfully authorized userInfo { } protocol: "org.apache.hadoop.yarn.api.ContainerManagementProtocolPB"
2016-08-20 21:57:07,667 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #14
2016-08-20 21:57:07,667 DEBUG [IPC Server handler 4 on 36489] ipc.Server: IPC Server handler 4 on 36489: org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.startContainers from 127.0.0.1:52632 Call#14 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:07,669 DEBUG [IPC Server handler 4 on 36489] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:07,672 DEBUG [IPC Server handler 4 on 36489] security.BaseContainerTokenSecretManager: Retrieving password for container_1471710419543_0001_01_000004 for user container_1471710419543_0001_01_000004 (auth:SIMPLE) to be run on NM localhost:36489
2016-08-20 21:57:07,673 DEBUG [IPC Server handler 4 on 36489] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 4 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 1024 virtual_cores: 1 } expiryTimeStamp: 1471711027345 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427269 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED
2016-08-20 21:57:07,673 INFO [IPC Server handler 4 on 36489] containermanager.ContainerManagerImpl: Start request for container_1471710419543_0001_01_000004 by user root
2016-08-20 21:57:07,675 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent.EventType: CONTAINER_LAUNCHED
2016-08-20 21:57:07,677 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000003 of type CONTAINER_LAUNCHED
2016-08-20 21:57:07,677 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000003 transitioned from LOCALIZED to RUNNING
2016-08-20 21:57:07,677 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerStartMonitoringEvent.EventType: START_MONITORING_CONTAINER
2016-08-20 21:57:07,677 DEBUG [IPC Server handler 4 on 36489] lib.MutableMetricsFactory: field public org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.startTime with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[])
2016-08-20 21:57:07,677 DEBUG [IPC Server handler 4 on 36489] lib.MutableMetricsFactory: field public org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.finishTime with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[])
2016-08-20 21:57:07,677 DEBUG [IPC Server handler 4 on 36489] lib.MutableMetricsFactory: field public org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.exitCode with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=, always=false, type=DEFAULT, valueName=Time, value=[])
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsSystemImpl: ContainerResource_container_1471710419543_0001_01_000004, Metrics for container: container_1471710419543_0001_01_000004
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsSourceAdapter: Updating attr cache...
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsSourceAdapter: Done. # tags & metrics=0
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsSourceAdapter: Updating info cache...
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsSystemImpl: []
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsSourceAdapter: Done
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] util.MBeans: Registered Hadoop:service=NodeManager,name=ContainerResource_container_1471710419543_0001_01_000004
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsSourceAdapter: MBean for source ContainerResource_container_1471710419543_0001_01_000004 registered.
2016-08-20 21:57:07,678 DEBUG [IPC Server handler 4 on 36489] impl.MetricsSystemImpl: Registered source ContainerResource_container_1471710419543_0001_01_000004
2016-08-20 21:57:07,679 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationContainerInitEvent.EventType: INIT_CONTAINER
2016-08-20 21:57:07,679 INFO [IPC Server handler 4 on 36489] nodemanager.NMAuditLogger: USER=root IP=127.0.0.1 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000004
2016-08-20 21:57:07,679 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type INIT_CONTAINER
2016-08-20 21:57:07,679 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Adding container_1471710419543_0001_01_000004 to application application_1471710419543_0001
2016-08-20 21:57:07,679 DEBUG [IPC Server handler 4 on 36489] ipc.Server: Served: startContainers queueTime= 2 procesingTime= 10
2016-08-20 21:57:07,679 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerInitEvent.EventType: INIT_CONTAINER
2016-08-20 21:57:07,679 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000004 of type INIT_CONTAINER
2016-08-20 21:57:07,679 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000004 transitioned from NEW to LOCALIZED
2016-08-20 21:57:07,679 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEvent.EventType: CONTAINER_INIT
2016-08-20 21:57:07,679 INFO [AsyncDispatcher event handler] containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1471710419543_0001
2016-08-20 21:57:07,679 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEvent.EventType: LAUNCH_CONTAINER
2016-08-20 21:57:07,685 DEBUG [ContainersLauncher #2] concurrent.HadoopThreadPoolExecutor: beforeExecute in thread: ContainersLauncher #2, runnable type: java.util.concurrent.FutureTask
2016-08-20 21:57:07,685 DEBUG [IPC Server handler 4 on 36489] ipc.Server: IPC Server handler 4 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.startContainers from 127.0.0.1:52632 Call#14 Retry#0
2016-08-20 21:57:07,686 DEBUG [IPC Server handler 4 on 36489] ipc.Server: IPC Server handler 4 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.startContainers from 127.0.0.1:52632 Call#14 Retry#0 Wrote 51 bytes.
2016-08-20 21:57:07,687 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 got value #14
2016-08-20 21:57:07,687 DEBUG [Thread-346] ipc.ProtobufRpcEngine: Call: startContainers took 49ms
2016-08-20 21:57:07,687 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: closed
2016-08-20 21:57:07,687 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: stopped, remaining connections 3
2016-08-20 21:57:07,687 DEBUG [Thread-346] impl.ContainerManagementProtocolProxy: Opening proxy : localhost:36489
2016-08-20 21:57:07,688 DEBUG [ContainersLauncher #2] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:327)
2016-08-20 21:57:07,688 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: disconnecting client 127.0.0.1:52632. Number of active connections: 0
2016-08-20 21:57:07,688 DEBUG [Thread-346] security.SecurityUtil: Acquired token Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,689 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:94)
2016-08-20 21:57:07,689 DEBUG [Thread-346] ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2016-08-20 21:57:07,690 DEBUG [Thread-346] ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@21109725
2016-08-20 21:57:07,690 DEBUG [Thread-346] ipc.Client: The ping interval is 60000 ms.
2016-08-20 21:57:07,690 DEBUG [Thread-346] ipc.Client: Connecting to localhost/127.0.0.1:36489
2016-08-20 21:57:07,691 DEBUG [IPC Server listener on 36489] ipc.Server: Server connection from 127.0.0.1:52634; # active connections: 1; # queued calls: 0
2016-08-20 21:57:07,691 DEBUG [Thread-346] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
2016-08-20 21:57:07,692 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: NEGOTIATE
2016-08-20 21:57:07,692 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,697 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,697 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2016-08-20 21:57:07,698 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52634 Call#-33 Retry#-1
2016-08-20 21:57:07,698 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52634 Call#-33 Retry#-1 Wrote 166 bytes.
2016-08-20 21:57:07,698 DEBUG [Thread-346] security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@340c728c
2016-08-20 21:57:07,698 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 3 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000004, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,699 DEBUG [Thread-346] security.NMTokenSelector: Looking for service: 127.0.0.1:36489. Current token is Kind: NMToken, Service: 127.0.0.1:36489, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:07,699 DEBUG [Thread-346] security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2016-08-20 21:57:07,699 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,703 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:07,701 DEBUG [Thread-346] security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2016-08-20 21:57:07,705 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting username: Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=
2016-08-20 21:57:07,705 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting userPassword
2016-08-20 21:57:07,705 DEBUG [Thread-346] security.SaslRpcClient: SASL client callback: setting realm: default
2016-08-20 21:57:07,706 DEBUG [Thread-346] security.SaslRpcClient: Sending sasl message state: INITIATE token: "charset=utf-8,username=\"Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCJnQIaBHJvb3Qg4oyskQE=\",realm=\"default\",nonce=\"jdfy5tnnMDosYDuUFXovAM8/4GSX3fahrtM3hkDw\",nc=00000001,cnonce=\"K1rFQsqgT20aEYUcYvMALCJrSrnAKOJXBxLs3iaB\",digest-uri=\"/default\",maxbuf=65536,response=1b55267b0b5708de6ff6f4f78843779b,qop=auth" auths { method: "TOKEN" mechanism: "DIGEST-MD5" protocol: "" serverId: "default" }
2016-08-20 21:57:07,707 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-33
2016-08-20 21:57:07,707 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Have read input token of size 298 for processing by saslServer.evaluateResponse()
2016-08-20 21:57:07,709 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,709 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:07,709 DEBUG [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000004 of type LAUNCHED
2016-08-20 21:57:07,709 INFO [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000004 Container Transitioned from ACQUIRED to RUNNING
2016-08-20 21:57:07,709 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:07,710 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,710 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,710 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,710 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,710 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,710 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:07,710 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,710 DEBUG [Socket Reader #1 for port 36489] security.BaseNMTokenSecretManager: creating password for appattempt_1471710419543_0001_000001 for user root to run on NM localhost:36489
2016-08-20 21:57:07,710 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:07,711 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:07,711 DEBUG [Socket Reader #1 for port 36489] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 36489 } appSubmitter: "root" keyId: 304809570
2016-08-20 21:57:07,711 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,711 DEBUG [Socket Reader #1 for port 36489] security.NMTokenSecretManagerInNM: NMToken password retrieved successfully!!
2016-08-20 21:57:07,711 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,711 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,712 DEBUG [Socket Reader #1 for port 36489] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: appattempt_1471710419543_0001_000001
2016-08-20 21:57:07,712 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2016-08-20 21:57:07,712 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server context established. Negotiated QoP is auth
2016-08-20 21:57:07,712 DEBUG [Socket Reader #1 for port 36489] ipc.Server: SASL server successfully authenticated client: appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,712 INFO [Socket Reader #1 for port 36489] ipc.Server: Auth successful for appattempt_1471710419543_0001_000001 (auth:SIMPLE)
2016-08-20 21:57:07,713 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52634 Call#-33 Retry#-1
2016-08-20 21:57:07,713 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: responding to null from 127.0.0.1:52634 Call#-33 Retry#-1 Wrote 64 bytes.
2016-08-20 21:57:07,714 DEBUG [Thread-346] ipc.Client: Negotiated QOP is :auth
2016-08-20 21:57:07,714 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,714 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,714 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,714 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:07,725 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,725 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 sending #15
2016-08-20 21:57:07,725 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:07,727 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:07,727 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,727 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,727 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #-3
2016-08-20 21:57:07,728 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Successfully authorized userInfo { } protocol: "org.apache.hadoop.yarn.api.ContainerManagementProtocolPB"
2016-08-20 21:57:07,728 DEBUG [Socket Reader #1 for port 36489] ipc.Server: got #15
2016-08-20 21:57:07,729 DEBUG [IPC Server handler 5 on 36489] ipc.Server: IPC Server handler 5 on 36489: org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.getContainerStatuses from 127.0.0.1:52634 Call#15 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:07,727 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: starting, having connections 4
2016-08-20 21:57:07,741 INFO [ContainersLauncher #1] nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000003/default_container_executor.sh]
2016-08-20 21:57:07,734 DEBUG [IPC Server handler 5 on 36489] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:07,759 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent.EventType: CONTAINER_LAUNCHED
2016-08-20 21:57:07,759 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000004 of type CONTAINER_LAUNCHED
2016-08-20 21:57:07,759 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000004 transitioned from LOCALIZED to RUNNING
2016-08-20 21:57:07,759 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerStartMonitoringEvent.EventType: START_MONITORING_CONTAINER
2016-08-20 21:57:07,769 INFO [IPC Server handler 5 on 36489] containermanager.ContainerManagerImpl: Getting container-status for container_1471710419543_0001_01_000004
2016-08-20 21:57:07,770 INFO [IPC Server handler 5 on 36489] containermanager.ContainerManagerImpl: Returning ContainerStatus: [ContainerId: container_1471710419543_0001_01_000004, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]
2016-08-20 21:57:07,770 DEBUG [IPC Server handler 5 on 36489] ipc.Server: Served: getContainerStatuses queueTime= 15 procesingTime= 26
2016-08-20 21:57:07,771 DEBUG [IPC Server handler 5 on 36489] ipc.Server: IPC Server handler 5 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.getContainerStatuses from 127.0.0.1:52634 Call#15 Retry#0
2016-08-20 21:57:07,771 DEBUG [IPC Server handler 5 on 36489] ipc.Server: IPC Server handler 5 on 36489: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.getContainerStatuses from 127.0.0.1:52634 Call#15 Retry#0 Wrote 77 bytes.
2016-08-20 21:57:07,772 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001 got value #15
2016-08-20 21:57:07,773 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: closed
2016-08-20 21:57:07,773 DEBUG [Socket Reader #1 for port 36489] ipc.Server: Socket Reader #1 for port 36489: disconnecting client 127.0.0.1:52634. Number of active connections: 0
2016-08-20 21:57:07,773 DEBUG [Thread-346] ipc.ProtobufRpcEngine: Call: getContainerStatuses took 83ms
2016-08-20 21:57:07,773 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:36489 from appattempt_1471710419543_0001_000001: stopped, remaining connections 3
2016-08-20 21:57:07,780 INFO [ContainersLauncher #2] nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000004/default_container_executor.sh]
2016-08-20 21:57:07,799 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,799 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 3 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000004, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,800 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,800 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:07,802 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,802 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:07,803 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:07,803 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,803 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,810 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,810 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,811 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,811 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:07,811 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,811 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:07,811 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:07,811 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,811 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,814 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,815 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,815 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,815 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:07,816 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,816 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:07,816 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:07,817 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,817 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,900 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,900 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 3 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000004, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,901 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,901 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:07,903 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,904 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:07,904 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:07,904 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,904 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,911 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,911 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:07,911 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,911 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:07,911 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,912 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:07,912 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:07,912 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,912 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,915 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:07,916 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:07,916 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,916 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:07,917 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:07,917 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:07,917 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:07,917 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:07,917 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:07,982 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root sending #16
2016-08-20 21:57:07,982 DEBUG [Socket Reader #1 for port 37347] ipc.Server: got #16
2016-08-20 21:57:07,982 DEBUG [IPC Server handler 0 on 37347] ipc.Server: IPC Server handler 0 on 37347: org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 127.0.0.1:46672 Call#16 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:07,983 DEBUG [IPC Server handler 0 on 37347] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:07,984 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.event.RMAppAttemptStatusupdateEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:07,984 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type STATUS_UPDATE
2016-08-20 21:57:07,986 DEBUG [IPC Server handler 0 on 37347] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000004 of type RELEASED
2016-08-20 21:57:07,987 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeCleanContainerEvent.EventType: CLEANUP_CONTAINER
2016-08-20 21:57:07,987 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type CLEANUP_CONTAINER
2016-08-20 21:57:07,987 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.event.RMAppAttemptContainerFinishedEvent.EventType: CONTAINER_FINISHED
2016-08-20 21:57:07,987 INFO [IPC Server handler 0 on 37347] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000004 Container Transitioned from RUNNING to RELEASED
2016-08-20 21:57:07,987 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type CONTAINER_FINISHED
2016-08-20 21:57:07,987 INFO [IPC Server handler 0 on 37347] resourcemanager.RMAuditLogger: USER=root IP=127.0.0.1 OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000004 RESOURCE=
2016-08-20 21:57:07,989 DEBUG [IPC Server handler 0 on 37347] scheduler.SchedulerNode: Released container container_1471710419543_0001_01_000004 of capacity on host localhost:36489, which currently has 2 containers, used and available, release resources=true
2016-08-20 21:57:07,991 DEBUG [IPC Server handler 0 on 37347] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.25 Partition:
2016-08-20 21:57:07,991 DEBUG [IPC Server handler 0 on 37347] capacity.LeafQueue: default used= numContainers=3 user=root user-resources=
2016-08-20 21:57:07,991 DEBUG [IPC Server handler 0 on 37347] capacity.ParentQueue: completedContainer root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=usedCapacity=0.25, numApps=1, numContainers=3, cluster=
2016-08-20 21:57:07,992 DEBUG [IPC Server handler 0 on 37347] capacity.ParentQueue: Re-sorting completed queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=3
2016-08-20 21:57:07,994 WARN [IPC Server handler 0 on 37347] scheduler.AbstractYarnScheduler: Error happens when checking increase request, Ignoring.. exception= org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Failed to get rmContainer for increase request, with container-id=container_1471710419543_0001_01_000004
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.createSchedContainerChangeRequest(AbstractYarnScheduler.java:768)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.createSchedContainerChangeRequests(AbstractYarnScheduler.java:785)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updateIncreaseRequests(CapacityScheduler.java:934)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:968)
	at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:525)
	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
	at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:663)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:08,000 DEBUG [IPC Server handler 0 on 37347] scheduler.AppSchedulingInfo: Added increase request:container_1471710419543_0001_01_000003 delta=
2016-08-20 21:57:08,002 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,003 DEBUG [IPC Server handler 0 on 37347] scheduler.AbstractYarnScheduler: Processing decrease request:, node=localhost:36489>
2016-08-20 21:57:08,003 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 3 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000004, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:08,005 DEBUG [IPC Server handler 0 on 37347] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.20833333 Partition:
2016-08-20 21:57:08,005 DEBUG [IPC Server handler 0 on 37347] capacity.LeafQueue: default used= numContainers=3 user=root user-resources=
2016-08-20 21:57:08,005 DEBUG [IPC Server handler 0 on 37347] scheduler.AppSchedulingInfo: Decrease container : applicationId=application_1471710419543_0001 container=container_1471710419543_0001_01_000002 host=localhost:36489 user=root resource=
2016-08-20 21:57:08,006 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,007 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,006 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.CMgrCompletedContainersEvent.EventType: FINISH_CONTAINERS
2016-08-20 21:57:08,012 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,012 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,012 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,012 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,012 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,012 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,012 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,012 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign containers to child-queue of root
2016-08-20 21:57:08,012 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:08,013 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,014 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerKillEvent.EventType: KILL_CONTAINER
2016-08-20 21:57:08,014 DEBUG [IPC Server handler 0 on 37347] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000002 of type CHANGE_RESOURCE
2016-08-20 21:57:08,014 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000004 of type KILL_CONTAINER
2016-08-20 21:57:08,014 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000004 transitioned from RUNNING to KILLING
2016-08-20 21:57:08,014 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEvent.EventType: CLEANUP_CONTAINER
2016-08-20 21:57:08,015 INFO [AsyncDispatcher event handler] launcher.ContainerLaunch: Cleaning up container container_1471710419543_0001_01_000004
2016-08-20 21:57:08,015 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Marking container container_1471710419543_0001_01_000004 as inactive
2016-08-20 21:57:08,015 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Getting pid for container container_1471710419543_0001_01_000004 to kill from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000004/container_1471710419543_0001_01_000004.pid
2016-08-20 21:57:08,015 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Accessing pid for container container_1471710419543_0001_01_000004 from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000004/container_1471710419543_0001_01_000004.pid
2016-08-20 21:57:08,016 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,017 DEBUG [AsyncDispatcher event handler] util.ProcessIdFileReader: Accessing pid from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000004/container_1471710419543_0001_01_000004.pid
2016-08-20 21:57:08,017 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:08,017 DEBUG [AsyncDispatcher event handler] util.ProcessIdFileReader: Got pid 3843 from path /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000004/container_1471710419543_0001_01_000004.pid
2016-08-20 21:57:08,017 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Got pid 3843 for container container_1471710419543_0001_01_000004
2016-08-20 21:57:08,017 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Sending signal to pid 3843 as user root for container container_1471710419543_0001_01_000004
2016-08-20 21:57:08,017 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,017 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:08,019 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,025 DEBUG [AsyncDispatcher event handler] nodemanager.DefaultContainerExecutor: Sending signal 15 to pid 3843 as user root
2016-08-20 21:57:08,026 DEBUG [IPC Server handler 0 on 37347] scheduler.SchedulerNode: Decreased container container_1471710419543_0001_01_000002 of capacity on host localhost:36489, which has 2 containers, used and available after allocation
2016-08-20 21:57:08,026 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: printChildQueues - queue: root child-queues: root.defaultusedCapacity=(0.20833333), label=(*)
2016-08-20 21:57:08,026 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.20833333, absoluteUsedCapacity=0.20833333, numApps=1, numContainers=3
2016-08-20 21:57:08,026 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: assignContainers: node=localhost #applications=1
2016-08-20 21:57:08,027 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.20833333 Partition:
2016-08-20 21:57:08,027 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: Headroom calculation for user root: userLimit= queueMaxAvailRes= consumed= headroom=
2016-08-20 21:57:08,027 DEBUG [SchedulerEventDispatcher:Event Processor] fica.FiCaSchedulerApp: pre-assignContainers for application application_1471710419543_0001
2016-08-20 21:57:08,027 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.20833333 Partition:
2016-08-20 21:57:08,027 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=2560
2016-08-20 21:57:08,027 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 0, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:08,027 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.20833333 Partition:
2016-08-20 21:57:08,027 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=2560
2016-08-20 21:57:08,027 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:08,028 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.IncreaseContainerAllocator: Looking at increase request for application=appattempt_1471710419543_0001_000001 priority=0
2016-08-20 21:57:08,028 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.IncreaseContainerAllocator: There's no increase request for appattempt_1471710419543_0001_000001 priority=0
2016-08-20 21:57:08,028 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.IncreaseContainerAllocator: Looking at increase request for application=appattempt_1471710419543_0001_000001 priority=1
2016-08-20 21:57:08,028 DEBUG [SchedulerEventDispatcher:Event Processor] allocator.IncreaseContainerAllocator: Looking at increase request=, node=localhost:36489>
2016-08-20 21:57:08,033 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerNode: Increased container container_1471710419543_0001_01_000003 of capacity on host localhost:36489, which has 2 containers, used and available after allocation
2016-08-20 21:57:08,033 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.AppSchedulingInfo: allocated increase request : applicationId=application_1471710419543_0001 container=container_1471710419543_0001_01_000003 host=localhost:36489 user=root resource=
2016-08-20 21:57:08,033 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.AppSchedulingInfo: remove increase request:, node=localhost:36489>
2016-08-20 21:57:08,033 DEBUG [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000003 of type CHANGE_RESOURCE
2016-08-20 21:57:08,034 INFO [SchedulerEventDispatcher:Event Processor] allocator.IncreaseContainerAllocator: Approved increase container request:, node=localhost:36489> fromReservation=false
2016-08-20 21:57:08,034 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: post-assignContainers for application application_1471710419543_0001
2016-08-20 21:57:08,034 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.20833333 Partition:
2016-08-20 21:57:08,034 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=3584
2016-08-20 21:57:08,034 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 0, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:08,037 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.20833333 Partition:
2016-08-20 21:57:08,037 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 headRoom= currentConsumption=3584
2016-08-20 21:57:08,037 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerApplicationAttempt: showRequests: application=application_1471710419543_0001 request={AllocationRequestId: 0, Priority: 1, Capability: , # Containers: 0, Location: *, Relax Locality: true, Execution Type Request: {Execution Type: GUARANTEED, Enforce Execution Type: false}, Node Label Expression: }
2016-08-20 21:57:08,038 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.29166666 Partition:
2016-08-20 21:57:08,038 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.29166666 Partition:
2016-08-20 21:57:08,038 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.29166666 Partition:
2016-08-20 21:57:08,038 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: default user=root used= numContainers=3 headroom = user-resources=
2016-08-20 21:57:08,038 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Assigned to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.29166666, absoluteUsedCapacity=0.29166666, numApps=1, numContainers=3 --> , NODE_LOCAL
2016-08-20 21:57:08,038 INFO [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.29166666, absoluteUsedCapacity=0.29166666, numApps=1, numContainers=3
2016-08-20 21:57:08,039 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: printChildQueues - queue: root child-queues: root.defaultusedCapacity=(0.29166666), label=(*)
2016-08-20 21:57:08,039 INFO [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.33333334 absoluteUsedCapacity=0.33333334 used= cluster=
2016-08-20 21:57:08,039 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: ParentQ=root assignedSoFarInThisIteration= usedCapacity=0.33333334 absoluteUsedCapacity=0.33333334
2016-08-20 21:57:08,039 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign containers to child-queue of root
2016-08-20 21:57:08,040 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: printChildQueues - queue: root child-queues: root.defaultusedCapacity=(0.29166666), label=(*)
2016-08-20 21:57:08,040 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Trying to assign to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.29166666, absoluteUsedCapacity=0.29166666, numApps=1, numContainers=3
absoluteUsedCapacity=0.29166666, numApps=1, numContainers=3 2016-08-20 21:57:08,040 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: assignContainers: node=localhost #applications=1 2016-08-20 21:57:08,040 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: Skip this queue=root.default, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition= 2016-08-20 21:57:08,041 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Assigned to queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.29166666, absoluteUsedCapacity=0.29166666, numApps=1, numContainers=3 --> , NODE_LOCAL 2016-08-20 21:57:08,041 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources: 2016-08-20 21:57:08,041 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource: 2016-08-20 21:57:08,041 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available: 2016-08-20 21:57:08,041 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition= 2016-08-20 21:57:08,041 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources: 2016-08-20 21:57:08,041 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource: 2016-08-20 21:57:08,041 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available: 2016-08-20 21:57:08,041 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition= 2016-08-20 21:57:08,041 DEBUG [IPC Server handler 0 on 37347] capacity.ParentQueue: completedContainer root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=usedCapacity=0.29166666, numApps=1, numContainers=2, cluster= 2016-08-20 21:57:08,050 INFO [IPC Server handler 0 on 37347] capacity.LeafQueue: Application attempt appattempt_1471710419543_0001_000001 decreased container:container_1471710419543_0001_01_000002 from to 2016-08-20 21:57:08,052 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeDecreaseContainerEvent.EventType: DECREASE_CONTAINER 2016-08-20 21:57:08,052 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type DECREASE_CONTAINER 2016-08-20 21:57:08,053 DEBUG [IPC Server handler 0 on 37347] security.BaseContainerTokenSecretManager: Creating password for container_1471710419543_0001_01_000003 for user container_1471710419543_0001_01_000003 (auth:SIMPLE) to be run on NM localhost:36489 2016-08-20 21:57:08,055 DEBUG [IPC Server handler 0 on 37347] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 3 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 2048 virtual_cores: 1 } expiryTimeStamp: 1471711028053 masterKeyId: -991580041 
rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427234 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED 2016-08-20 21:57:08,060 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Sent signal SIGTERM to pid 3843 as user root for container container_1471710419543_0001_01_000004, result=success 2016-08-20 21:57:08,061 DEBUG [IPC Server handler 0 on 37347] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 3 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 2048 virtual_cores: 1 } expiryTimeStamp: 1471711028053 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427234 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED 2016-08-20 21:57:08,068 DEBUG [IPC Server handler 0 on 37347] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000003 of type ACQUIRE_UPDATED_CONTAINER 2016-08-20 21:57:08,068 DEBUG [IPC Server handler 0 on 37347] security.BaseContainerTokenSecretManager: Creating password for container_1471710419543_0001_01_000002 for user container_1471710419543_0001_01_000002 (auth:SIMPLE) to be run on NM localhost:36489 2016-08-20 21:57:08,069 DEBUG [IPC Server handler 0 on 37347] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 2 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 512 virtual_cores: 1 } expiryTimeStamp: 1471711028068 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427212 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED 2016-08-20 21:57:08,062 DEBUG [AsyncDispatcher event handler] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:327) 2016-08-20 21:57:08,101 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor: Exit code from container container_1471710419543_0001_01_000004 is : 143 2016-08-20 21:57:08,106 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true, 2016-08-20 21:57:08,106 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 3 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000004, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: Container Killed by ResourceManager , ExitStatus: -106, ]] 2016-08-20 21:57:08,106 DEBUG [ContainersLauncher #2] container.ContainerImpl: Processing container_1471710419543_0001_01_000004 of type UPDATE_DIAGNOSTICS_MSG 2016-08-20 21:57:08,108 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE 2016-08-20 21:57:08,110 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE 2016-08-20 
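The ContainerTokenIdentifier dumps above are the RM-minted container tokens handed back to the AM; each one binds a container id, the NM address, the submitting user, and the resource the NM will enforce. A minimal sketch (not part of this test; the class name is illustrative) that reconstructs the same identifiers from YARN's public records API:

    import org.apache.hadoop.yarn.api.records.ContainerId;
    import org.apache.hadoop.yarn.api.records.Resource;

    public class TokenFieldsSketch {
      public static void main(String[] args) {
        // Rebuild the container id seen in the token dump:
        // cluster_timestamp=1471710419543, application 1, attempt 1, container 3.
        ContainerId id = ContainerId.fromString("container_1471710419543_0001_01_000003");
        System.out.println(id.getApplicationAttemptId()); // appattempt_1471710419543_0001_000001
        // The token also pins the enforced capability
        // (resource { memory: 2048 virtual_cores: 1 } in the dump).
        Resource capability = Resource.newInstance(2048, 1);
        System.out.println(capability);
      }
    }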
2016-08-20 21:57:08,107 DEBUG [ContainersLauncher #2] launcher.ContainerLaunch: Container container_1471710419543_0001_01_000004 completed with exit code 143
2016-08-20 21:57:08,108 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.CMgrDecreaseContainersResourceEvent.EventType: DECREASE_CONTAINERS_RESOURCE
2016-08-20 21:57:08,112 DEBUG [ContainersLauncher #2] concurrent.ExecutorHelper: afterExecute in thread: ContainersLauncher #2, runnable type: java.util.concurrent.FutureTask
2016-08-20 21:57:08,113 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,113 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,114 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerExitEvent.EventType: CONTAINER_KILLED_ON_REQUEST
2016-08-20 21:57:08,119 DEBUG [IPC Server handler 0 on 37347] security.ContainerTokenIdentifier: Writing ContainerTokenIdentifier to RPC layer: containerId { app_attempt_id { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } id: 2 } nmHostAddr: "localhost:36489" appSubmitter: "root" resource { memory: 512 virtual_cores: 1 } expiryTimeStamp: 1471711028068 masterKeyId: -991580041 rmIdentifier: 1471710419543 priority { priority: 1 } creationTime: 1471710427212 nodeLabelExpression: "" containerType: TASK executionType: GUARANTEED
2016-08-20 21:57:08,117 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,116 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,120 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:08,121 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]]
2016-08-20 21:57:08,121 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000004 of type CONTAINER_KILLED_ON_REQUEST
2016-08-20 21:57:08,121 DEBUG [IPC Server handler 0 on 37347] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000002 of type ACQUIRE_UPDATED_CONTAINER
2016-08-20 21:57:08,121 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,121 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,121 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,121 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,121 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,121 DEBUG [IPC Server handler 0 on 37347] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.29166666 Partition:
2016-08-20 21:57:08,121 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,122 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:08,121 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,130 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:08,130 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000004 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2016-08-20 21:57:08,131 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.ContainerLocalizationCleanupEvent.EventType: CLEANUP_CONTAINER_RESOURCES
2016-08-20 21:57:08,125 DEBUG [IPC Server handler 0 on 37347] ipc.Server: Served: allocate queueTime= 1 procesingTime= 142
2016-08-20 21:57:08,123 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,130 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:08,131 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,131 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,131 DEBUG [IPC Server handler 0 on 37347] ipc.Server: IPC Server handler 0 on 37347: responding to org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 127.0.0.1:46672 Call#16 Retry#0
2016-08-20 21:57:08,131 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:08,131 DEBUG [IPC Server handler 0 on 37347] ipc.Server: IPC Server handler 0 on 37347: responding to org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 127.0.0.1:46672 Call#16 Retry#0 Wrote 561 bytes.
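Call#16 is the AM's allocate heartbeat; the RM piggybacks the approved container decrease on its response, which is why the client-side confirmation appears just below. A hedged sketch of the client side, assuming the 2.8-era AMRMClient.requestContainerResourceChange API that TestAMRMClient exercises (sizes mirror the token dumps above; registration details and error handling omitted):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
    import org.apache.hadoop.yarn.api.records.Container;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

    public class DecreaseSketch {
      public static void main(String[] args) throws Exception {
        AMRMClient<ContainerRequest> am = AMRMClient.createAMRMClient();
        am.init(new Configuration());
        am.start();
        am.registerApplicationMaster("localhost", -1, "");
        // Ask for a 2048 MB container at priority 1, as in the request= line above.
        am.addContainerRequest(new ContainerRequest(
            Resource.newInstance(2048, 1), null, null, Priority.newInstance(1)));
        AllocateResponse rsp = am.allocate(0.1f);
        for (Container c : rsp.getAllocatedContainers()) {
          // Shrink the running container to 512 MB; the next heartbeat carries
          // the change request, and a later response returns the RM's approval.
          am.requestContainerResourceChange(c, Resource.newInstance(512, 1));
        }
        am.allocate(0.2f); // "RM has confirmed changed resource allocation ..." arrives here
      }
    }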
2016-08-20 21:57:08,131 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:08,132 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,132 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,132 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:37347 from root got value #16
2016-08-20 21:57:08,132 DEBUG [Thread-346] ipc.ProtobufRpcEngine: Call: allocate took 151ms
2016-08-20 21:57:08,133 DEBUG [Thread-346] impl.AMRMClientImpl: RM has confirmed changed resource allocation for container container_1471710419543_0001_01_000003. Current resource allocation:. Remove pending change request:
2016-08-20 21:57:08,133 DEBUG [Thread-346] impl.AMRMClientImpl: RM has confirmed changed resource allocation for container container_1471710419543_0001_01_000002. Current resource allocation:. Remove pending change request:
2016-08-20 21:57:08,133 DEBUG [Thread-346] service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl entered state STOPPED
2016-08-20 21:57:08,134 DEBUG [Thread-346] ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@21109725
2016-08-20 21:57:08,135 DEBUG [DeletionService #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,135 DEBUG [DeletionService #0] nodemanager.DeletionService: FileDeletionTask : user : root subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000004 baseDir : null
2016-08-20 21:57:08,136 DEBUG [DeletionService #0] nodemanager.DeletionService: Deleting path: [/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000004] as user: [root]
2016-08-20 21:57:08,138 INFO [DeletionService #0] nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000004
2016-08-20 21:57:08,139 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:44193 from root sending #17
2016-08-20 21:57:08,139 DEBUG [Socket Reader #1 for port 44193] ipc.Server: got #17
2016-08-20 21:57:08,139 DEBUG [DeletionService #0] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,140 DEBUG [IPC Server handler 3 on 44193] ipc.Server: IPC Server handler 3 on 44193: org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.forceKillApplication from 127.0.0.1:58714 Call#17 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:08,140 DEBUG [DeletionService #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,140 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent.EventType: CONTAINER_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,140 DEBUG [IPC Server handler 3 on 44193] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:08,140 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000004 of type CONTAINER_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,140 DEBUG [DeletionService #0] nodemanager.DeletionService: FileDeletionTask : user : null subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000004 baseDir : null
2016-08-20 21:57:08,141 INFO [AsyncDispatcher event handler] nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Killed TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000004
2016-08-20 21:57:08,141 DEBUG [DeletionService #0] nodemanager.DeletionService: NM deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000004
2016-08-20 21:57:08,141 DEBUG [IPC Server handler 3 on 44193] security.ApplicationACLsManager: Verifying access-type MODIFY_APP for root (auth:SIMPLE) on application application_1471710419543_0001 owned by root
2016-08-20 21:57:08,145 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,145 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000004 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2016-08-20 21:57:08,147 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 3 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000004, ExecutionType: GUARANTEED, State: COMPLETE, Capability: , Diagnostics: Container Killed by ResourceManager Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143. , ExitStatus: -106, ]]
2016-08-20 21:57:08,147 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,147 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,148 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationContainerFinishedEvent.EventType: APPLICATION_CONTAINER_FINISHED
2016-08-20 21:57:08,149 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type APPLICATION_CONTAINER_FINISHED
2016-08-20 21:57:08,149 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Removing container_1471710419543_0001_01_000004 from application application_1471710419543_0001
2016-08-20 21:57:08,150 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerStopMonitoringEvent.EventType: STOP_MONITORING_CONTAINER
2016-08-20 21:57:08,150 DEBUG [DeletionService #0] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,151 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerContainerFinishedEvent.EventType: CONTAINER_FINISHED
2016-08-20 21:57:08,152 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEvent.EventType: CONTAINER_STOP
2016-08-20 21:57:08,152 INFO [AsyncDispatcher event handler] containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1471710419543_0001
2016-08-20 21:57:08,155 DEBUG [IPC Server handler 3 on 44193] ipc.Server: Served: forceKillApplication queueTime= 2 procesingTime= 14
2016-08-20 21:57:08,155 DEBUG [IPC Server handler 3 on 44193] ipc.Server: IPC Server handler 3 on 44193: responding to org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.forceKillApplication from 127.0.0.1:58714 Call#17 Retry#0
2016-08-20 21:57:08,156 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:44193 from root] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:44193 from root got value #17
2016-08-20 21:57:08,156 DEBUG [main] ipc.ProtobufRpcEngine: Call: forceKillApplication took 18ms
2016-08-20 21:57:08,157 DEBUG [IPC Server handler 3 on 44193] ipc.Server: IPC Server handler 3 on 44193: responding to org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.forceKillApplication from 127.0.0.1:58714 Call#17 Retry#0 Wrote 34 bytes.
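Call#17 (forceKillApplication, answered in 18ms) is the RPC that YarnClient.killApplication issues under the hood. A minimal sketch of that client call, with the application id taken from the log (class name is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.client.api.YarnClient;

    public class KillSketch {
      public static void main(String[] args) throws Exception {
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(new Configuration());
        yarn.start();
        // application_1471710419543_0001 == (clusterTimestamp, id)
        ApplicationId appId = ApplicationId.newInstance(1471710419543L, 1);
        // Issues ApplicationClientProtocolPB.forceKillApplication to the RM.
        yarn.killApplication(appId);
        yarn.stop();
      }
    }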
2016-08-20 21:57:08,158 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppKillByClientEvent.EventType: KILL
2016-08-20 21:57:08,158 DEBUG [AsyncDispatcher event handler] rmapp.RMAppImpl: Processing event for application_1471710419543_0001 of type KILL
2016-08-20 21:57:08,159 INFO [AsyncDispatcher event handler] resourcemanager.RMAuditLogger: USER=root IP=127.0.0.1 OPERATION=Kill Application Request TARGET=RMAppImpl RESULT=SUCCESS APPID=application_1471710419543_0001
2016-08-20 21:57:08,159 INFO [AsyncDispatcher event handler] rmapp.RMAppImpl: application_1471710419543_0001 State change from RUNNING to KILLING on event=KILL
2016-08-20 21:57:08,159 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,159 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent.EventType: KILL
2016-08-20 21:57:08,159 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type KILL
2016-08-20 21:57:08,160 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,160 INFO [SchedulerEventDispatcher:Event Processor] scheduler.AbstractYarnScheduler: Container container_1471710419543_0001_01_000004 completed with event FINISHED, but corresponding RMContainer doesn't exist.
2016-08-20 21:57:08,160 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,160 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,160 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,161 INFO [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Updating application attempt appattempt_1471710419543_0001_000001 with final state: KILLED, and exit status: -1000
2016-08-20 21:57:08,162 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateUpdateAppAttemptEvent.EventType: UPDATE_APP_ATTEMPT
2016-08-20 21:57:08,162 DEBUG [AsyncDispatcher event handler] recovery.RMStateStore: Processing event of type UPDATE_APP_ATTEMPT
2016-08-20 21:57:08,163 DEBUG [AsyncDispatcher event handler] recovery.RMStateStore: Updating info for attempt: appattempt_1471710419543_0001_000001
2016-08-20 21:57:08,163 INFO [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: appattempt_1471710419543_0001_000001 State change from RUNNING to FINAL_SAVING
2016-08-20 21:57:08,163 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent.EventType: ATTEMPT_UPDATE_SAVED
2016-08-20 21:57:08,163 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type ATTEMPT_UPDATE_SAVED
2016-08-20 21:57:08,163 INFO [AsyncDispatcher event handler] resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1471710419543_0001_000001
2016-08-20 21:57:08,164 INFO [AsyncDispatcher event handler] security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1471710419543_0001_000001
2016-08-20 21:57:08,165 INFO [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: appattempt_1471710419543_0001_000001 State change from FINAL_SAVING to KILLED
2016-08-20 21:57:08,165 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppFailedAttemptEvent.EventType: ATTEMPT_KILLED
2016-08-20 21:57:08,165 DEBUG [AsyncDispatcher event handler] rmapp.RMAppImpl: Processing event for application_1471710419543_0001 of type ATTEMPT_KILLED
2016-08-20 21:57:08,165 INFO [AsyncDispatcher event handler] rmapp.RMAppImpl: Updating application application_1471710419543_0001 with final state: KILLED
2016-08-20 21:57:08,166 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateUpdateAppEvent.EventType: UPDATE_APP
2016-08-20 21:57:08,166 DEBUG [AsyncDispatcher event handler] recovery.RMStateStore: Processing event of type UPDATE_APP
2016-08-20 21:57:08,166 INFO [AsyncDispatcher event handler] recovery.RMStateStore: Updating info for app: application_1471710419543_0001
2016-08-20 21:57:08,167 INFO [AsyncDispatcher event handler] rmapp.RMAppImpl: application_1471710419543_0001 State change from KILLING to FINAL_SAVING on event=ATTEMPT_KILLED
2016-08-20 21:57:08,167 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.AppAttemptRemovedSchedulerEvent.EventType: APP_ATTEMPT_REMOVED
2016-08-20 21:57:08,167 INFO [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Application Attempt appattempt_1471710419543_0001_000001 is done. finalState=KILLED
2016-08-20 21:57:08,167 DEBUG [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000001 of type KILL
2016-08-20 21:57:08,167 INFO [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000001 Container Transitioned from RUNNING to KILLED
2016-08-20 21:57:08,167 INFO [SchedulerEventDispatcher:Event Processor] resourcemanager.RMAuditLogger: USER=root OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000001 RESOURCE=
2016-08-20 21:57:08,168 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerNode: Released container container_1471710419543_0001_01_000001 of capacity on host localhost:43931, which currently has 0 containers, used and available, release resources=true
2016-08-20 21:57:08,168 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.20833333 Partition:
2016-08-20 21:57:08,168 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: default used= numContainers=2 user=root user-resources=
2016-08-20 21:57:08,168 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: completedContainer root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=usedCapacity=0.20833333, numApps=1, numContainers=1, cluster=
2016-08-20 21:57:08,168 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Re-sorting completed queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.20833333, absoluteUsedCapacity=0.20833333, numApps=1, numContainers=2
2016-08-20 21:57:08,169 DEBUG [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000002 of type KILL
2016-08-20 21:57:08,169 INFO [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000002 Container Transitioned from RUNNING to KILLED
2016-08-20 21:57:08,169 INFO [SchedulerEventDispatcher:Event Processor] resourcemanager.RMAuditLogger: USER=root OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000002 RESOURCE=
2016-08-20 21:57:08,169 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerNode: Released container container_1471710419543_0001_01_000002 of capacity on host localhost:36489, which currently has 1 containers, used and available, release resources=true
2016-08-20 21:57:08,169 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.16666667 Partition:
2016-08-20 21:57:08,169 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: default used= numContainers=1 user=root user-resources=
2016-08-20 21:57:08,169 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: completedContainer root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=usedCapacity=0.16666667, numApps=1, numContainers=0, cluster=
2016-08-20 21:57:08,169 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Re-sorting completed queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.16666667, absoluteUsedCapacity=0.16666667, numApps=1, numContainers=1
2016-08-20 21:57:08,170 DEBUG [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: Processing container_1471710419543_0001_01_000003 of type KILL
2016-08-20 21:57:08,170 INFO [SchedulerEventDispatcher:Event Processor] rmcontainer.RMContainerImpl: container_1471710419543_0001_01_000003 Container Transitioned from RUNNING to KILLED
2016-08-20 21:57:08,170 INFO [SchedulerEventDispatcher:Event Processor] resourcemanager.RMAuditLogger: USER=root OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000003 RESOURCE=
2016-08-20 21:57:08,170 DEBUG [SchedulerEventDispatcher:Event Processor] scheduler.SchedulerNode: Released container container_1471710419543_0001_01_000003 of capacity on host localhost:36489, which currently has 0 containers, used and available, release resources=true
2016-08-20 21:57:08,170 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: User limit computation for root in queue default userLimitPercent=100 userLimitFactor=1.0 required: consumed: user-limit-resource: queueCapacity: qconsumed: consumedRatio: 0.0 currentCapacity: activeUsers: 0 clusterCapacity: resourceByLabel: usageratio: 0.0 Partition:
2016-08-20 21:57:08,170 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: default used= numContainers=0 user=root user-resources=
2016-08-20 21:57:08,170 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: completedContainer root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=usedCapacity=0.0, numApps=1, numContainers=-1, cluster=
2016-08-20 21:57:08,171 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Re-sorting completed queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2016-08-20 21:57:08,171 INFO [SchedulerEventDispatcher:Event Processor] scheduler.AppSchedulingInfo: Application application_1471710419543_0001 requests cleared
2016-08-20 21:57:08,171 INFO [SchedulerEventDispatcher:Event Processor] capacity.LeafQueue: Application removed - appId: application_1471710419543_0001 user: root queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2016-08-20 21:57:08,172 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncherEvent.EventType: CLEANUP
2016-08-20 21:57:08,174 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEvent.EventType: APP_UPDATE_SAVED
2016-08-20 21:57:08,174 DEBUG [AsyncDispatcher event handler] rmapp.RMAppImpl: Processing event for application_1471710419543_0001 of type APP_UPDATE_SAVED
2016-08-20 21:57:08,180 INFO [ApplicationMasterLauncher #1] amlauncher.AMLauncher: Cleaning master appattempt_1471710419543_0001_000001
2016-08-20 21:57:08,180 DEBUG [ApplicationMasterLauncher #1] ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
2016-08-20 21:57:08,181 DEBUG [ApplicationMasterLauncher #1] security.BaseNMTokenSecretManager: creating password for appattempt_1471710419543_0001_000001 for user root to run on NM localhost:43931
2016-08-20 21:57:08,181 DEBUG [ApplicationMasterLauncher #1] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 43931 } appSubmitter: "root" keyId: 304809570
2016-08-20 21:57:08,182 INFO [AsyncDispatcher event handler] rmapp.RMAppImpl: application_1471710419543_0001 State change from FINAL_SAVING to KILLED on event=APP_UPDATE_SAVED
2016-08-20 21:57:08,182 DEBUG [ApplicationMasterLauncher #1] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 43931 } appSubmitter: "root" keyId: 304809570
2016-08-20 21:57:08,182 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeCleanContainerEvent.EventType: CLEANUP_CONTAINER
2016-08-20 21:57:08,182 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type CLEANUP_CONTAINER
2016-08-20 21:57:08,183 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.event.RMAppAttemptContainerFinishedEvent.EventType: CONTAINER_FINISHED
2016-08-20 21:57:08,183 DEBUG [ApplicationMasterLauncher #1] security.SecurityUtil: Acquired token Kind: NMToken, Service: 127.0.0.1:43931, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 43931 } appSubmitter: "root" keyId: 304809570)
2016-08-20 21:57:08,183 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type CONTAINER_FINISHED
2016-08-20 21:57:08,184 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeCleanContainerEvent.EventType: CLEANUP_CONTAINER
2016-08-20 21:57:08,184 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type CLEANUP_CONTAINER
2016-08-20 21:57:08,184 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.event.RMAppAttemptContainerFinishedEvent.EventType: CONTAINER_FINISHED
2016-08-20 21:57:08,185 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type CONTAINER_FINISHED
2016-08-20 21:57:08,185 DEBUG [ApplicationMasterLauncher #1] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:94)
2016-08-20 21:57:08,185 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeCleanContainerEvent.EventType: CLEANUP_CONTAINER
2016-08-20 21:57:08,185 DEBUG [ApplicationMasterLauncher #1] ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2016-08-20 21:57:08,185 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type CLEANUP_CONTAINER
2016-08-20 21:57:08,185 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.event.RMAppAttemptContainerFinishedEvent.EventType: CONTAINER_FINISHED
2016-08-20 21:57:08,185 DEBUG [AsyncDispatcher event handler] attempt.RMAppAttemptImpl: Processing event for appattempt_1471710419543_0001_000001 of type CONTAINER_FINISHED
2016-08-20 21:57:08,185 DEBUG [ApplicationMasterLauncher #1] ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@21109725
2016-08-20 21:57:08,186 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeCleanAppEvent.EventType: CLEANUP_APP
2016-08-20 21:57:08,186 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type CLEANUP_APP
2016-08-20 21:57:08,188 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeCleanAppEvent.EventType: CLEANUP_APP
2016-08-20 21:57:08,188 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type CLEANUP_APP
2016-08-20 21:57:08,189 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.AppRemovedSchedulerEvent.EventType: APP_REMOVED
2016-08-20 21:57:08,189 INFO [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Application removed - appId: application_1471710419543_0001 user: root leaf-queue of parent: root #applications: 0
2016-08-20 21:57:08,190 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.RMAppManagerEvent.EventType: APP_COMPLETED
2016-08-20 21:57:08,190 DEBUG [AsyncDispatcher event handler] resourcemanager.RMAppManager: RMAppManager processing event for application_1471710419543_0001 of type APP_COMPLETED
2016-08-20 21:57:08,190 DEBUG [ApplicationMasterLauncher #1] ipc.Client: The ping interval is 60000 ms.
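AMLauncher's cleanup above mints an NMToken (BaseNMTokenSecretManager), then builds a ContainerManagementProtocol proxy through HadoopYarnProtoRPC; that boils down to attaching the token to a UGI and creating the proxy inside doAs. A simplified sketch under that assumption; the real path goes through ServerProxy.createRetriableProxy, and this helper's name and wiring are illustrative:

    import java.net.InetSocketAddress;
    import java.security.PrivilegedAction;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;
    import org.apache.hadoop.yarn.api.ContainerManagementProtocol;
    import org.apache.hadoop.yarn.api.records.Token;
    import org.apache.hadoop.yarn.ipc.YarnRPC;
    import org.apache.hadoop.yarn.util.ConverterUtils;

    public final class CmProxySketch {
      static ContainerManagementProtocol connect(
          Configuration conf, String attemptId, InetSocketAddress nmAddr, Token nmToken) {
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser(attemptId);
        // The NMToken is the credential the DIGEST-MD5 handshake below verifies.
        ugi.addToken(ConverterUtils.convertFromYarn(nmToken, nmAddr));
        return ugi.doAs((PrivilegedAction<ContainerManagementProtocol>) () ->
            (ContainerManagementProtocol) YarnRPC.create(conf)
                .getProxy(ContainerManagementProtocol.class, nmAddr, conf));
      }
    }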
2016-08-20 21:57:08,191 DEBUG [ApplicationMasterLauncher #1] ipc.Client: Connecting to localhost/127.0.0.1:43931 2016-08-20 21:57:08,191 DEBUG [IPC Server listener on 43931] ipc.Server: Server connection from 127.0.0.1:43416; # active connections: 2; # queued calls: 0 2016-08-20 21:57:08,192 INFO [AsyncDispatcher event handler] resourcemanager.RMAuditLogger: USER=root OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1471710419543_0001 2016-08-20 21:57:08,192 DEBUG [ApplicationMasterLauncher #1] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788) 2016-08-20 21:57:08,193 DEBUG [ApplicationMasterLauncher #1] security.SaslRpcClient: Sending sasl message state: NEGOTIATE 2016-08-20 21:57:08,197 DEBUG [AsyncDispatcher event handler] scheduler.AbstractYarnScheduler: Request for appInfo of unknown attempt appattempt_1471710419543_0001_000001 2016-08-20 21:57:08,197 DEBUG [Socket Reader #1 for port 43931] ipc.Server: got #-33 2016-08-20 21:57:08,198 DEBUG [Socket Reader #1 for port 43931] security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5 2016-08-20 21:57:08,198 DEBUG [Socket Reader #1 for port 43931] ipc.Server: Socket Reader #1 for port 43931: responding to null from 127.0.0.1:43416 Call#-33 Retry#-1 2016-08-20 21:57:08,198 DEBUG [Socket Reader #1 for port 43931] ipc.Server: Socket Reader #1 for port 43931: responding to null from 127.0.0.1:43416 Call#-33 Retry#-1 Wrote 166 bytes. 2016-08-20 21:57:08,198 DEBUG [ApplicationMasterLauncher #1] security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@64987d9b 2016-08-20 21:57:08,199 DEBUG [ApplicationMasterLauncher #1] security.NMTokenSelector: Looking for service: 127.0.0.1:43931. 
Current token is Kind: NMToken, Service: 127.0.0.1:43931, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 43931 } appSubmitter: "root" keyId: 304809570) 2016-08-20 21:57:08,201 DEBUG [ApplicationMasterLauncher #1] security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default 2016-08-20 21:57:08,201 INFO [AsyncDispatcher event handler] resourcemanager.RMAppManager$ApplicationSummary: appId=application_1471710419543_0001,name=Test,user=root,queue=default,state=KILLED,trackingUrl=http://localhost:34016/cluster/app/application_1471710419543_0001,appMasterHost=N/A,startTime=1471710426226,finishTime=1471710428165,finalStatus=KILLED,memorySeconds=4814,vcoreSeconds=1,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=,applicationType=YARN 2016-08-20 21:57:08,202 DEBUG [ApplicationMasterLauncher #1] security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB 2016-08-20 21:57:08,202 DEBUG [ApplicationMasterLauncher #1] security.SaslRpcClient: SASL client callback: setting username: Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCb1wIaBHJvb3Qg4oyskQE= 2016-08-20 21:57:08,202 DEBUG [ApplicationMasterLauncher #1] security.SaslRpcClient: SASL client callback: setting userPassword 2016-08-20 21:57:08,202 DEBUG [ApplicationMasterLauncher #1] security.SaslRpcClient: SASL client callback: setting realm: default 2016-08-20 21:57:08,206 DEBUG [ApplicationMasterLauncher #1] security.SaslRpcClient: Sending sasl message state: INITIATE token: "charset=utf-8,username=\"Cg0KCQgBENe0m8bqKhABEg8KCWxvY2FsaG9zdBCb1wIaBHJvb3Qg4oyskQE=\",realm=\"default\",nonce=\"69fw407eizoOHx/i0FKinf/NJlaRno0ais0N9mlT\",nc=00000001,cnonce=\"GtUC6LCHvQAAU6qNnwKA16NrTKiyqXWjRedLLhZP\",digest-uri=\"/default\",maxbuf=65536,response=43faa1ce1dac8ae8ee8c19e7d29304fc,qop=auth" auths { method: "TOKEN" mechanism: "DIGEST-MD5" protocol: "" serverId: "default" } 2016-08-20 21:57:08,207 DEBUG [Socket Reader #1 for port 43931] ipc.Server: got #-33 2016-08-20 21:57:08,207 DEBUG [Socket Reader #1 for port 43931] ipc.Server: Have read input token of size 298 for processing by saslServer.evaluateResponse() 2016-08-20 21:57:08,207 DEBUG [Socket Reader #1 for port 43931] security.BaseNMTokenSecretManager: creating password for appattempt_1471710419543_0001_000001 for user root to run on NM localhost:43931 2016-08-20 21:57:08,207 DEBUG [Socket Reader #1 for port 43931] security.NMTokenIdentifier: Writing NMTokenIdentifier to RPC layer: appAttemptId { application_id { id: 1 cluster_timestamp: 1471710419543 } attemptId: 1 } nodeId { host: "localhost" port: 43931 } appSubmitter: "root" keyId: 304809570 2016-08-20 21:57:08,208 DEBUG [Socket Reader #1 for port 43931] security.NMTokenSecretManagerInNM: NMToken password retrieved successfully!! 2016-08-20 21:57:08,208 DEBUG [Socket Reader #1 for port 43931] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: appattempt_1471710419543_0001_000001 (auth:SIMPLE) 2016-08-20 21:57:08,209 DEBUG [Socket Reader #1 for port 43931] security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: appattempt_1471710419543_0001_000001 2016-08-20 21:57:08,209 DEBUG [Socket Reader #1 for port 43931] ipc.Server: Will send SUCCESS token of size 40 from saslServer. 2016-08-20 21:57:08,209 DEBUG [Socket Reader #1 for port 43931] ipc.Server: SASL server context established. 
Negotiated QoP is auth 2016-08-20 21:57:08,209 DEBUG [Socket Reader #1 for port 43931] ipc.Server: SASL server successfully authenticated client: appattempt_1471710419543_0001_000001 (auth:SIMPLE) 2016-08-20 21:57:08,209 INFO [Socket Reader #1 for port 43931] ipc.Server: Auth successful for appattempt_1471710419543_0001_000001 (auth:SIMPLE) 2016-08-20 21:57:08,209 DEBUG [Socket Reader #1 for port 43931] ipc.Server: Socket Reader #1 for port 43931: responding to null from 127.0.0.1:43416 Call#-33 Retry#-1 2016-08-20 21:57:08,209 DEBUG [Socket Reader #1 for port 43931] ipc.Server: Socket Reader #1 for port 43931: responding to null from 127.0.0.1:43416 Call#-33 Retry#-1 Wrote 64 bytes. 2016-08-20 21:57:08,210 DEBUG [ApplicationMasterLauncher #1] ipc.Client: Negotiated QOP is :auth 2016-08-20 21:57:08,212 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:43931 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:43931 from appattempt_1471710419543_0001_000001: starting, having connections 4 2016-08-20 21:57:08,214 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true, 2016-08-20 21:57:08,214 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: [] 2016-08-20 21:57:08,214 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE 2016-08-20 21:57:08,214 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE 2016-08-20 21:57:08,215 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE 2016-08-20 21:57:08,215 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources: 2016-08-20 21:57:08,215 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource: 2016-08-20 21:57:08,215 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available: 2016-08-20 21:57:08,216 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition= 2016-08-20 21:57:08,216 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:43931 from appattempt_1471710419543_0001_000001 sending #18 2016-08-20 21:57:08,217 DEBUG [Socket Reader #1 for port 43931] ipc.Server: got #-3 2016-08-20 21:57:08,218 DEBUG [Socket Reader #1 for port 43931] ipc.Server: Successfully authorized userInfo { } protocol: "org.apache.hadoop.yarn.api.ContainerManagementProtocolPB" 2016-08-20 21:57:08,218 DEBUG [Socket Reader #1 for port 43931] ipc.Server: got #18 2016-08-20 21:57:08,218 DEBUG [IPC Server handler 0 on 43931] ipc.Server: IPC Server handler 0 on 43931: org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.stopContainers from 127.0.0.1:43416 Call#18 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER 2016-08-20 21:57:08,223 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true, 2016-08-20 21:57:08,223 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending 
out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]] 2016-08-20 21:57:08,223 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.CMgrCompletedContainersEvent.EventType: FINISH_CONTAINERS 2016-08-20 21:57:08,224 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerKillEvent.EventType: KILL_CONTAINER 2016-08-20 21:57:08,224 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000001 of type KILL_CONTAINER 2016-08-20 21:57:08,224 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000001 transitioned from RUNNING to KILLING 2016-08-20 21:57:08,224 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEvent.EventType: CLEANUP_CONTAINER 2016-08-20 21:57:08,224 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE 2016-08-20 21:57:08,224 INFO [AsyncDispatcher event handler] launcher.ContainerLaunch: Cleaning up container container_1471710419543_0001_01_000001 2016-08-20 21:57:08,224 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE 2016-08-20 21:57:08,226 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.CMgrCompletedAppsEvent.EventType: FINISH_APPS 2016-08-20 21:57:08,227 DEBUG [IPC Server handler 0 on 43931] security.UserGroupInformation: PrivilegedAction as:appattempt_1471710419543_0001_000001 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419) 2016-08-20 21:57:08,228 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Marking container container_1471710419543_0001_01_000001 as inactive 2016-08-20 21:57:08,228 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Container container_1471710419543_0001_01_000001 is the first container get launched for application application_1471710419543_0001 2016-08-20 21:57:08,228 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Getting pid for container container_1471710419543_0001_01_000001 to kill from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000001/container_1471710419543_0001_01_000001.pid 2016-08-20 21:57:08,228 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Accessing pid for container container_1471710419543_0001_01_000001 from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000001/container_1471710419543_0001_01_000001.pid 2016-08-20 21:57:08,228 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE 2016-08-20 21:57:08,230 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources: 2016-08-20 21:57:08,230 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource: 2016-08-20 21:57:08,230 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available: 2016-08-20 21:57:08,231 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition= 2016-08-20 21:57:08,232 INFO [IPC Server handler 0 on 43931] containermanager.ContainerManagerImpl: Stopping container with container Id: container_1471710419543_0001_01_000001 2016-08-20 21:57:08,232 INFO [IPC Server handler 0 on 43931] nodemanager.NMAuditLogger: USER=root IP=127.0.0.1 OPERATION=Stop Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000001 2016-08-20 21:57:08,228 DEBUG [AsyncDispatcher event handler] util.ProcessIdFileReader: Accessing pid from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000001/container_1471710419543_0001_01_000001.pid 2016-08-20 21:57:08,234 DEBUG [AsyncDispatcher event handler] util.ProcessIdFileReader: Got pid 3819 from path /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000001/container_1471710419543_0001_01_000001.pid 2016-08-20 21:57:08,234 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Got pid 3819 for container container_1471710419543_0001_01_000001 2016-08-20 21:57:08,234 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Sending signal to pid 3819 as user root for container container_1471710419543_0001_01_000001 2016-08-20 21:57:08,234 DEBUG [AsyncDispatcher event handler] nodemanager.DefaultContainerExecutor: Sending signal 15 to pid 3819 as user root 2016-08-20 21:57:08,247 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true, 2016-08-20 21:57:08,248 DEBUG [IPC Server handler 0 on 43931] ipc.Server: Served: stopContainers queueTime= 9 procesingTime= 21 2016-08-20 21:57:08,252 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 2 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, ]] 2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE 2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] 
2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.CMgrCompletedContainersEvent.EventType: FINISH_CONTAINERS
2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.CMgrCompletedAppsEvent.EventType: FINISH_APPS
2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerKillEvent.EventType: KILL_CONTAINER
2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000002 of type KILL_CONTAINER
2016-08-20 21:57:08,253 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000002 transitioned from RUNNING to KILLING
2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerKillEvent.EventType: KILL_CONTAINER
2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000003 of type KILL_CONTAINER
2016-08-20 21:57:08,253 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000003 transitioned from RUNNING to KILLING
2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationFinishEvent.EventType: FINISH_APPLICATION
2016-08-20 21:57:08,253 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type FINISH_APPLICATION
2016-08-20 21:57:08,254 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Application application_1471710419543_0001 transitioned from RUNNING to FINISHING_CONTAINERS_WAIT
2016-08-20 21:57:08,254 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEvent.EventType: CLEANUP_CONTAINER
2016-08-20 21:57:08,254 INFO [AsyncDispatcher event handler] launcher.ContainerLaunch: Cleaning up container container_1471710419543_0001_01_000002
2016-08-20 21:57:08,254 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Marking container container_1471710419543_0001_01_000002 as inactive
2016-08-20 21:57:08,254 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Getting pid for container container_1471710419543_0001_01_000002 to kill from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000002/container_1471710419543_0001_01_000002.pid
2016-08-20 21:57:08,254 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Accessing pid for container container_1471710419543_0001_01_000002 from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000002/container_1471710419543_0001_01_000002.pid
2016-08-20 21:57:08,254 DEBUG [AsyncDispatcher event handler] util.ProcessIdFileReader: Accessing pid from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000002/container_1471710419543_0001_01_000002.pid
2016-08-20 21:57:08,255 DEBUG [AsyncDispatcher event handler] util.ProcessIdFileReader: Got pid 3828 from path /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000002/container_1471710419543_0001_01_000002.pid
2016-08-20 21:57:08,255 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Got pid 3828 for container container_1471710419543_0001_01_000002
2016-08-20 21:57:08,255 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Sending signal to pid 3828 as user root for container container_1471710419543_0001_01_000002
2016-08-20 21:57:08,255 DEBUG [AsyncDispatcher event handler] nodemanager.DefaultContainerExecutor: Sending signal 15 to pid 3828 as user root
2016-08-20 21:57:08,257 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Container container_1471710419543_0001_01_000002 is the first container launched for application application_1471710419543_0001
2016-08-20 21:57:08,259 DEBUG [IPC Server handler 0 on 43931] ipc.Server: IPC Server handler 0 on 43931: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.stopContainers from 127.0.0.1:43416 Call#18 Retry#0
2016-08-20 21:57:08,259 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:43931 from appattempt_1471710419543_0001_000001] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:43931 from appattempt_1471710419543_0001_000001 got value #18
2016-08-20 21:57:08,259 DEBUG [ApplicationMasterLauncher #1] ipc.ProtobufRpcEngine: Call: stopContainers took 69ms
2016-08-20 21:57:08,260 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,260 DEBUG [IPC Server handler 0 on 43931] ipc.Server: IPC Server handler 0 on 43931: responding to org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.stopContainers from 127.0.0.1:43416 Call#18 Retry#0 Wrote 51 bytes.
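[Annotation] The "Getting pid", "Got pid 3828" and "Sending signal 15" lines show the kill path: ContainerLaunch resolves the container's process id from its nmPrivate pid file, then DefaultContainerExecutor delivers SIGTERM. A stripped-down sketch of that pattern (hypothetical standalone code, not the Hadoop implementation):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SignalFromPidFile {
        public static void main(String[] args) throws IOException, InterruptedException {
            // The nmPrivate pid file holds a single line, e.g. "3828"
            String pid = Files.readAllLines(Paths.get(args[0])).get(0).trim();
            // Signal 15 = SIGTERM, matching "Sending signal 15 to pid 3828 as user root"
            Process kill = new ProcessBuilder("kill", "-15", pid).inheritIO().start();
            System.exit(kill.waitFor());
        }
    }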
2016-08-20 21:57:08,260 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,260 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,261 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,261 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,268 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Sent signal SIGTERM to pid 3819 as user root for container container_1471710419543_0001_01_000001, result=success
2016-08-20 21:57:08,269 DEBUG [AsyncDispatcher event handler] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:327)
2016-08-20 21:57:08,278 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor: Exit code from container container_1471710419543_0001_01_000001 is : 143
2016-08-20 21:57:08,279 DEBUG [ContainersLauncher #0] container.ContainerImpl: Processing container_1471710419543_0001_01_000001 of type UPDATE_DIAGNOSTICS_MSG
2016-08-20 21:57:08,279 DEBUG [ContainersLauncher #0] launcher.ContainerLaunch: Container container_1471710419543_0001_01_000001 completed with exit code 143
2016-08-20 21:57:08,279 DEBUG [ContainersLauncher #0] concurrent.ExecutorHelper: afterExecute in thread: ContainersLauncher #0, runnable type: java.util.concurrent.FutureTask
2016-08-20 21:57:08,305 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor: Exit code from container container_1471710419543_0001_01_000002 is : 143
2016-08-20 21:57:08,305 DEBUG [ContainersLauncher #0] container.ContainerImpl: Processing container_1471710419543_0001_01_000002 of type UPDATE_DIAGNOSTICS_MSG
2016-08-20 21:57:08,305 DEBUG [ContainersLauncher #0] launcher.ContainerLaunch: Container container_1471710419543_0001_01_000002 completed with exit code 143
2016-08-20 21:57:08,305 DEBUG [ContainersLauncher #0] concurrent.ExecutorHelper: afterExecute in thread: ContainersLauncher #0, runnable type: java.util.concurrent.FutureTask
2016-08-20 21:57:08,306 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Sent signal SIGTERM to pid 3828 as user root for container container_1471710419543_0001_01_000002, result=success
2016-08-20 21:57:08,307 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationFinishEvent.EventType: FINISH_APPLICATION
2016-08-20 21:57:08,309 DEBUG [AsyncDispatcher event handler] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:327)
2016-08-20 21:57:08,308 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type FINISH_APPLICATION
2016-08-20 21:57:08,310 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Application application_1471710419543_0001 transitioned from RUNNING to FINISHING_CONTAINERS_WAIT
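[Annotation] The exit code 143 reported for each killed container is the usual POSIX encoding of death-by-signal: 128 plus the signal number, and the signal sent above was 15 (SIGTERM). A one-line check:

    public class ExitCodeNote {
        public static void main(String[] args) {
            int exitCode = 143;                 // as logged by DefaultContainerExecutor
            System.out.println(exitCode - 128); // 15, i.e. SIGTERM
        }
    }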
2016-08-20 21:57:08,311 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerKillEvent.EventType: KILL_CONTAINER
2016-08-20 21:57:08,311 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000001 of type KILL_CONTAINER
2016-08-20 21:57:08,311 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerExitEvent.EventType: CONTAINER_KILLED_ON_REQUEST
2016-08-20 21:57:08,311 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000001 of type CONTAINER_KILLED_ON_REQUEST
2016-08-20 21:57:08,311 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000001 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2016-08-20 21:57:08,311 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerKillEvent.EventType: KILL_CONTAINER
2016-08-20 21:57:08,311 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000001 of type KILL_CONTAINER
2016-08-20 21:57:08,312 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.ContainerLocalizationCleanupEvent.EventType: CLEANUP_CONTAINER_RESOURCES
2016-08-20 21:57:08,313 DEBUG [DeletionService #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,314 DEBUG [DeletionService #0] nodemanager.DeletionService: FileDeletionTask : user : root subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000001 baseDir : null
2016-08-20 21:57:08,314 DEBUG [DeletionService #0] nodemanager.DeletionService: Deleting path: [/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000001] as user: [root]
2016-08-20 21:57:08,314 INFO [DeletionService #0] nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000001
2016-08-20 21:57:08,314 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,314 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,315 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,315 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:08,315 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,315 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:08,315 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:08,315 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,315 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,316 DEBUG [DeletionService #0] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,316 DEBUG [DeletionService #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,316 DEBUG [DeletionService #0] nodemanager.DeletionService: FileDeletionTask : user : null subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000001 baseDir : null
2016-08-20 21:57:08,316 DEBUG [DeletionService #0] nodemanager.DeletionService: NM deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000001
2016-08-20 21:57:08,317 DEBUG [Task killer for 3843] nodemanager.DefaultContainerExecutor: Sending signal 9 to pid 3843 as user root
2016-08-20 21:57:08,317 DEBUG [DeletionService #0] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,317 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent.EventType: CONTAINER_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,317 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000001 of type CONTAINER_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,317 INFO [AsyncDispatcher event handler] nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Killed TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000001
2016-08-20 21:57:08,322 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000001 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2016-08-20 21:57:08,322 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationContainerFinishedEvent.EventType: APPLICATION_CONTAINER_FINISHED
2016-08-20 21:57:08,322 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type APPLICATION_CONTAINER_FINISHED
2016-08-20 21:57:08,322 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Removing container_1471710419543_0001_01_000001 from application application_1471710419543_0001
2016-08-20 21:57:08,323 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Application application_1471710419543_0001 transitioned from FINISHING_CONTAINERS_WAIT to APPLICATION_RESOURCES_CLEANINGUP
2016-08-20 21:57:08,323 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerStopMonitoringEvent.EventType: STOP_MONITORING_CONTAINER
2016-08-20 21:57:08,323 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerContainerFinishedEvent.EventType: CONTAINER_FINISHED
2016-08-20 21:57:08,323 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEvent.EventType: CONTAINER_STOP
2016-08-20 21:57:08,323 INFO [AsyncDispatcher event handler] containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1471710419543_0001
2016-08-20 21:57:08,323 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.ApplicationLocalizationEvent.EventType: DESTROY_APPLICATION_RESOURCES
2016-08-20 21:57:08,324 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,324 DEBUG [DeletionService #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,324 DEBUG [DeletionService #0] nodemanager.DeletionService: FileDeletionTask : user : root subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/usercache/root/appcache/application_1471710419543_0001 baseDir : null
2016-08-20 21:57:08,324 DEBUG [DeletionService #0] nodemanager.DeletionService: Deleting path: [/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/usercache/root/appcache/application_1471710419543_0001] as user: [root]
2016-08-20 21:57:08,324 INFO [DeletionService #0] nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/usercache/root/appcache/application_1471710419543_0001
2016-08-20 21:57:08,325 DEBUG [DeletionService #0] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,326 DEBUG [DeletionService #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,326 DEBUG [DeletionService #0] nodemanager.DeletionService: FileDeletionTask : user : null subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/nmPrivate/application_1471710419543_0001 baseDir : null
2016-08-20 21:57:08,326 DEBUG [DeletionService #0] nodemanager.DeletionService: NM deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-1_0/nmPrivate/application_1471710419543_0001
2016-08-20 21:57:08,326 DEBUG [DeletionService #0] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,327 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEvent.EventType: APPLICATION_STOP
2016-08-20 21:57:08,327 INFO [AsyncDispatcher event handler] containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1471710419543_0001
2016-08-20 21:57:08,327 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEvent.EventType: APPLICATION_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,328 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: application_1471710419543_0001 is completing, remove container_1471710419543_0001_01_000001 from NM context.
2016-08-20 21:57:08,329 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 1 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000001, ExecutionType: GUARANTEED, State: COMPLETE, Capability: , Diagnostics: Container Killed by ResourceManager Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143. , ExitStatus: -106, ]]
2016-08-20 21:57:08,329 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type APPLICATION_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,329 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,329 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:08,330 DEBUG [AsyncDispatcher event handler] security.NMTokenSecretManagerInNM: Removing application attempts NMToken keys for application application_1471710419543_0001
2016-08-20 21:57:08,331 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Application application_1471710419543_0001 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2016-08-20 21:57:08,331 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerAppFinishedEvent.EventType: APPLICATION_FINISHED
2016-08-20 21:57:08,331 INFO [AsyncDispatcher event handler] loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1471710419543_0001, with delay of 1 seconds
2016-08-20 21:57:08,341 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,341 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:08,341 INFO [SchedulerEventDispatcher:Event Processor] scheduler.AbstractYarnScheduler: Container container_1471710419543_0001_01_000001 completed with event FINISHED, but corresponding RMContainer doesn't exist.
2016-08-20 21:57:08,341 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:08,342 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,342 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,353 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,353 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEvent.EventType: CLEANUP_CONTAINER
2016-08-20 21:57:08,353 INFO [AsyncDispatcher event handler] launcher.ContainerLaunch: Cleaning up container container_1471710419543_0001_01_000003
2016-08-20 21:57:08,353 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Marking container container_1471710419543_0001_01_000003 as inactive
2016-08-20 21:57:08,353 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: application_1471710419543_0001 is completing, remove container_1471710419543_0001_01_000004 from NM context.
2016-08-20 21:57:08,353 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 3 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: Container Killed by ResourceManager Container killed on request. Exit code is 143 , ExitStatus: -106, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: Container Killed by ResourceManager , ExitStatus: -106, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000004, ExecutionType: GUARANTEED, State: COMPLETE, Capability: , Diagnostics: Container Killed by ResourceManager Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143. , ExitStatus: -106, ]]
2016-08-20 21:57:08,354 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,354 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,353 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Getting pid for container container_1471710419543_0001_01_000003 to kill from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000003/container_1471710419543_0001_01_000003.pid
2016-08-20 21:57:08,356 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Accessing pid for container container_1471710419543_0001_01_000003 from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000003/container_1471710419543_0001_01_000003.pid
2016-08-20 21:57:08,356 DEBUG [AsyncDispatcher event handler] util.ProcessIdFileReader: Accessing pid from pid file /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000003/container_1471710419543_0001_01_000003.pid
2016-08-20 21:57:08,357 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:44193 from root sending #19
2016-08-20 21:57:08,360 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,361 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,361 DEBUG [Socket Reader #1 for port 44193] ipc.Server: got #19
2016-08-20 21:57:08,361 INFO [SchedulerEventDispatcher:Event Processor] scheduler.AbstractYarnScheduler: Container container_1471710419543_0001_01_000004 completed with event FINISHED, but corresponding RMContainer doesn't exist.
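[Annotation] Note the two ExitStatus values in the heartbeat payloads: -1000 while a container is still RUNNING (no real exit status yet) and -106 once it has been killed at the ResourceManager's request. Both correspond to constants on org.apache.hadoop.yarn.api.records.ContainerExitStatus:

    import org.apache.hadoop.yarn.api.records.ContainerExitStatus;

    public class ExitStatusConstants {
        public static void main(String[] args) {
            System.out.println(ContainerExitStatus.INVALID);                   // -1000: container not yet complete
            System.out.println(ContainerExitStatus.KILLED_BY_RESOURCEMANAGER); // -106: RM-initiated kill
        }
    }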
2016-08-20 21:57:08,361 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,361 DEBUG [IPC Server handler 5 on 44193] ipc.Server: IPC Server handler 5 on 44193: org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.forceKillApplication from 127.0.0.1:58714 Call#19 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-08-20 21:57:08,361 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,361 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,361 DEBUG [AsyncDispatcher event handler] util.ProcessIdFileReader: Got pid 3836 from path /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000003/container_1471710419543_0001_01_000003.pid
2016-08-20 21:57:08,362 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Got pid 3836 for container container_1471710419543_0001_01_000003
2016-08-20 21:57:08,362 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Sending signal to pid 3836 as user root for container container_1471710419543_0001_01_000003
2016-08-20 21:57:08,362 DEBUG [AsyncDispatcher event handler] nodemanager.DefaultContainerExecutor: Sending signal 15 to pid 3836 as user root
2016-08-20 21:57:08,370 DEBUG [IPC Server handler 5 on 44193] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
2016-08-20 21:57:08,372 DEBUG [IPC Server handler 5 on 44193] security.ApplicationACLsManager: Verifying access-type MODIFY_APP for root (auth:SIMPLE) on application application_1471710419543_0001 owned by root
2016-08-20 21:57:08,372 DEBUG [IPC Server handler 5 on 44193] ipc.Server: Served: forceKillApplication queueTime= 11 processingTime= 0
2016-08-20 21:57:08,375 DEBUG [IPC Server handler 5 on 44193] ipc.Server: IPC Server handler 5 on 44193: responding to org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.forceKillApplication from 127.0.0.1:58714 Call#19 Retry#0
2016-08-20 21:57:08,375 DEBUG [IPC Server handler 5 on 44193] ipc.Server: IPC Server handler 5 on 44193: responding to org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.forceKillApplication from 127.0.0.1:58714 Call#19 Retry#0 Wrote 34 bytes.
2016-08-20 21:57:08,380 DEBUG [IPC Client (1470959992) connection to localhost/127.0.0.1:44193 from root] ipc.Client: IPC Client (1470959992) connection to localhost/127.0.0.1:44193 from root got value #19
2016-08-20 21:57:08,380 DEBUG [main] ipc.ProtobufRpcEngine: Call: forceKillApplication took 23ms
2016-08-20 21:57:08,381 INFO [main] impl.YarnClientImpl: Killed application application_1471710419543_0001
2016-08-20 21:57:08,396 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor: Exit code from container container_1471710419543_0001_01_000003 is : 143
2016-08-20 21:57:08,405 DEBUG [ContainersLauncher #1] container.ContainerImpl: Processing container_1471710419543_0001_01_000003 of type UPDATE_DIAGNOSTICS_MSG
2016-08-20 21:57:08,405 DEBUG [ContainersLauncher #1] launcher.ContainerLaunch: Container container_1471710419543_0001_01_000003 completed with exit code 143
2016-08-20 21:57:08,405 DEBUG [ContainersLauncher #1] concurrent.ExecutorHelper: afterExecute in thread: ContainersLauncher #1, runnable type: java.util.concurrent.FutureTask
2016-08-20 21:57:08,398 DEBUG [AsyncDispatcher event handler] launcher.ContainerLaunch: Sent signal SIGTERM to pid 3836 as user root for container container_1471710419543_0001_01_000003, result=success
2016-08-20 21:57:08,413 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state STOPPED
2016-08-20 21:57:08,414 DEBUG [main] ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@21109725
2016-08-20 21:57:08,414 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.TestAMRMClient entered state STOPPED
2016-08-20 21:57:08,414 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.client.api.impl.TestAMRMClient: stopping services, size=4
2016-08-20 21:57:08,414 DEBUG [main] service.CompositeService: Stopping service #3: Service org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper_2 in state org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper_2: STARTED
2016-08-20 21:57:08,415 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,415 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,415 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,415 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:46239 of type STATUS_UPDATE
2016-08-20 21:57:08,416 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,416 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:46239 clusterResources:
2016-08-20 21:57:08,416 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:46239 availableResource:
2016-08-20 21:57:08,416 DEBUG [AsyncDispatcher event handler] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:327)
2016-08-20 21:57:08,416 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
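[Annotation] The forceKillApplication round trip above (Call#19, followed by YarnClientImpl's "Killed application") is what YarnClient.killApplication produces on the client side. A minimal sketch, assuming a reachable ResourceManager configured in the Configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.client.api.YarnClient;

    public class KillApplicationSketch {
        public static void main(String[] args) throws Exception {
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(new Configuration());
            yarnClient.start();
            // Cluster timestamp and id taken from application_1471710419543_0001
            ApplicationId appId = ApplicationId.newInstance(1471710419543L, 1);
            // Issues the ApplicationClientProtocol.forceKillApplication RPC traced above
            yarnClient.killApplication(appId);
            yarnClient.stop();
        }
    }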
2016-08-20 21:57:08,416 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,419 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper_2 entered state STOPPED
2016-08-20 21:57:08,420 DEBUG [main] service.AbstractService: Service: NodeManager entered state STOPPED
2016-08-20 21:57:08,420 DEBUG [main] service.CompositeService: NodeManager: stopping services, size=8
2016-08-20 21:57:08,420 DEBUG [main] service.CompositeService: Stopping service #7: Service org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl in state org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: STARTED
2016-08-20 21:57:08,421 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl entered state STOPPED
2016-08-20 21:57:08,429 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,429 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,430 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,430 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:08,430 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,430 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:08,430 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:08,430 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,430 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,433 INFO [main] nodemanager.NodeStatusUpdaterImpl: Successfully Unregistered the Node localhost:46239 with ResourceManager.
2016-08-20 21:57:08,433 DEBUG [main] service.CompositeService: Stopping service #6: Service org.apache.hadoop.util.JvmPauseMonitor in state org.apache.hadoop.util.JvmPauseMonitor: STARTED
2016-08-20 21:57:08,433 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.util.JvmPauseMonitor entered state STOPPED
2016-08-20 21:57:08,436 DEBUG [main] service.CompositeService: Stopping service #5: Service Dispatcher in state Dispatcher: STARTED
2016-08-20 21:57:08,436 DEBUG [main] service.AbstractService: Service: Dispatcher entered state STOPPED
2016-08-20 21:57:08,436 DEBUG [main] service.CompositeService: Stopping service #4: Service org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer in state org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer: STARTED
2016-08-20 21:57:08,436 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer entered state STOPPED
2016-08-20 21:57:08,437 DEBUG [main] webapp.WebServer: Stopping webapp
2016-08-20 21:57:08,444 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.nio.SelectChannelConnector$1@363c32cc
2016-08-20 21:57:08,446 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.webapp.WebAppContext@57a6a933{/,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/node}
2016-08-20 21:57:08,454 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,455 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 2 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: Container Killed by ResourceManager Container killed on request. Exit code is 143 , ExitStatus: -106, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: RUNNING, Capability: , Diagnostics: Container Killed by ResourceManager Container killed on request. Exit code is 143 , ExitStatus: -106, ]]
2016-08-20 21:57:08,455 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,455 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,456 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,456 DEBUG [main] mortbay.log: stopping SessionHandler@5b5b59
2016-08-20 21:57:08,458 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,458 DEBUG [main] mortbay.log: stopping SecurityHandler@1934ad7c
2016-08-20 21:57:08,458 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,459 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,459 DEBUG [main] mortbay.log: stopping ServletHandler@b27b210
2016-08-20 21:57:08,459 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,459 DEBUG [main] mortbay.log: stopped guice
2016-08-20 21:57:08,459 DEBUG [main] mortbay.log: stopped org.apache.hadoop.security.http.XFrameOptionsFilter
2016-08-20 21:57:08,459 DEBUG [main] mortbay.log: stopped static_user_filter
2016-08-20 21:57:08,459 DEBUG [main] mortbay.log: stopped safety
2016-08-20 21:57:08,459 DEBUG [main] mortbay.log: stopped NoCacheFilter
2016-08-20 21:57:08,459 DEBUG [main] mortbay.log: stopped NoCacheFilter
2016-08-20 21:57:08,460 DEBUG [main] mortbay.log: stopped conf
2016-08-20 21:57:08,460 DEBUG [main] mortbay.log: stopped jmx
2016-08-20 21:57:08,460 DEBUG [main] mortbay.log: stopped logLevel
2016-08-20 21:57:08,460 DEBUG [main] mortbay.log: stopped stacks
2016-08-20 21:57:08,460 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@69aa7d76
2016-08-20 21:57:08,460 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet-50297459
2016-08-20 21:57:08,460 DEBUG [main] mortbay.log: stopped ServletHandler@b27b210
2016-08-20 21:57:08,460 DEBUG [main] mortbay.log: stopped SecurityHandler@1934ad7c
2016-08-20 21:57:08,460 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@33f17289
2016-08-20 21:57:08,462 DEBUG [main] mortbay.log: stopped SessionHandler@5b5b59
2016-08-20 21:57:08,462 DEBUG [main] mortbay.log: stopping ErrorPageErrorHandler@f1266c6
2016-08-20 21:57:08,462 DEBUG [main] mortbay.log: stopped ErrorPageErrorHandler@f1266c6
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - guice as filter
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - org.apache.hadoop.security.http.XFrameOptionsFilter as filter
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - static_user_filter as filter
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - safety as filter
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - NoCacheFilter as filter
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - NoCacheFilter as filter
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=guice,[/*],[],15) as filterMapping
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=org.apache.hadoop.security.http.XFrameOptionsFilter,[/*],[],15) as filterMapping
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=static_user_filter,[/ws/*],[],15) as filterMapping
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=static_user_filter,[/node/*],[],15) as filterMapping
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=static_user_filter,[/conf],[],15) as filterMapping
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=static_user_filter,[/jmx],[],15) as filterMapping
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=static_user_filter,[/logLevel],[],15) as filterMapping
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=static_user_filter,[/stacks],[],15) as filterMapping
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=static_user_filter,[*.html, *.jsp],[],15) as filterMapping
2016-08-20 21:57:08,463 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=safety,[/*],[],15) as filterMapping
2016-08-20 21:57:08,464 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=NoCacheFilter,[/*],[],15) as filterMapping
2016-08-20 21:57:08,464 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (F=NoCacheFilter,[/*],[],15) as filterMapping
2016-08-20 21:57:08,464 DEBUG [main] mortbay.log: filterNameMap=null
2016-08-20 21:57:08,464 DEBUG [main] mortbay.log: pathFilters=null
2016-08-20 21:57:08,468 DEBUG [main] mortbay.log: servletFilterMap=null
2016-08-20 21:57:08,466 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerKillEvent.EventType: KILL_CONTAINER
2016-08-20 21:57:08,468 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000002 of type KILL_CONTAINER
2016-08-20 21:57:08,469 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerKillEvent.EventType: KILL_CONTAINER
2016-08-20 21:57:08,469 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000003 of type KILL_CONTAINER
2016-08-20 21:57:08,469 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerExitEvent.EventType: CONTAINER_KILLED_ON_REQUEST
2016-08-20 21:57:08,469 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000002 of type CONTAINER_KILLED_ON_REQUEST
2016-08-20 21:57:08,470 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000002 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2016-08-20 21:57:08,470 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerExitEvent.EventType: CONTAINER_KILLED_ON_REQUEST
2016-08-20 21:57:08,470 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000003 of type CONTAINER_KILLED_ON_REQUEST
2016-08-20 21:57:08,470 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000003 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2016-08-20 21:57:08,470 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.ContainerLocalizationCleanupEvent.EventType: CLEANUP_CONTAINER_RESOURCES
2016-08-20 21:57:08,468 DEBUG [main] mortbay.log: servletPathMap={/jmx=jmx, /conf=conf, /stacks=stacks, /logLevel=logLevel, /=org.mortbay.jetty.servlet.DefaultServlet-50297459}
2016-08-20 21:57:08,471 DEBUG [main] mortbay.log: servletNameMap={logLevel=logLevel, jmx=jmx, stacks=stacks, org.mortbay.jetty.servlet.DefaultServlet-50297459=org.mortbay.jetty.servlet.DefaultServlet-50297459, conf=conf}
2016-08-20 21:57:08,471 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - conf as servlet
2016-08-20 21:57:08,471 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - jmx as servlet
2016-08-20 21:57:08,472 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - logLevel as servlet
2016-08-20 21:57:08,472 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - stacks as servlet
2016-08-20 21:57:08,472 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - org.mortbay.jetty.servlet.DefaultServlet-50297459 as servlet
2016-08-20 21:57:08,472 DEBUG [DeletionService #1] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #1, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,472 DEBUG [DeletionService #1] nodemanager.DeletionService: FileDeletionTask : user : root subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000002 baseDir : null
2016-08-20 21:57:08,472 DEBUG [DeletionService #1] nodemanager.DeletionService: Deleting path: [/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000002] as user: [root]
2016-08-20 21:57:08,472 INFO [DeletionService #1] nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000002
2016-08-20 21:57:08,473 DEBUG [DeletionService #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,473 DEBUG [DeletionService #0] nodemanager.DeletionService: FileDeletionTask : user : null subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000002 baseDir : null
2016-08-20 21:57:08,473 DEBUG [DeletionService #0] nodemanager.DeletionService: NM deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000002
2016-08-20 21:57:08,473 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.ContainerLocalizationCleanupEvent.EventType: CLEANUP_CONTAINER_RESOURCES
2016-08-20 21:57:08,473 DEBUG [DeletionService #1] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #1, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,474 DEBUG [DeletionService #1] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #1, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,474 DEBUG [DeletionService #1] nodemanager.DeletionService: FileDeletionTask : user : root subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000003 baseDir : null
2016-08-20 21:57:08,474 DEBUG [DeletionService #1] nodemanager.DeletionService: Deleting path: [/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000003] as user: [root]
2016-08-20 21:57:08,474 INFO [DeletionService #1] nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001/container_1471710419543_0001_01_000003
2016-08-20 21:57:08,474 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent.EventType: CONTAINER_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,474 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000002 of type CONTAINER_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,474 INFO [AsyncDispatcher event handler] nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Killed TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000002
2016-08-20 21:57:08,473 DEBUG [DeletionService #0] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,479 DEBUG [DeletionService #1] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #1, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,477 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000002 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2016-08-20 21:57:08,479 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent.EventType: CONTAINER_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,479 DEBUG [AsyncDispatcher event handler] container.ContainerImpl: Processing container_1471710419543_0001_01_000003 of type CONTAINER_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,479 INFO [AsyncDispatcher event handler] nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Killed TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1471710419543_0001 CONTAINERID=container_1471710419543_0001_01_000003
2016-08-20 21:57:08,478 DEBUG [DeletionService #2] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #2, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,481 DEBUG [DeletionService #2] nodemanager.DeletionService: FileDeletionTask : user : null subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000003 baseDir : null
2016-08-20 21:57:08,481 DEBUG [DeletionService #2] nodemanager.DeletionService: NM deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001/container_1471710419543_0001_01_000003
2016-08-20 21:57:08,482 DEBUG [DeletionService #2] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #2, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,477 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (S=conf,[/conf]) as servletMapping
2016-08-20 21:57:08,477 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,483 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: application_1471710419543_0001 is completing, remove container_1471710419543_0001_01_000002 from NM context.
2016-08-20 21:57:08,481 INFO [AsyncDispatcher event handler] container.ContainerImpl: Container container_1471710419543_0001_01_000003 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE 2016-08-20 21:57:08,483 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationContainerFinishedEvent.EventType: APPLICATION_CONTAINER_FINISHED 2016-08-20 21:57:08,483 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type APPLICATION_CONTAINER_FINISHED 2016-08-20 21:57:08,483 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Removing container_1471710419543_0001_01_000002 from application application_1471710419543_0001 2016-08-20 21:57:08,483 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerStopMonitoringEvent.EventType: STOP_MONITORING_CONTAINER 2016-08-20 21:57:08,483 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerContainerFinishedEvent.EventType: CONTAINER_FINISHED 2016-08-20 21:57:08,483 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEvent.EventType: CONTAINER_STOP 2016-08-20 21:57:08,483 INFO [AsyncDispatcher event handler] containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1471710419543_0001 2016-08-20 21:57:08,483 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationContainerFinishedEvent.EventType: APPLICATION_CONTAINER_FINISHED 2016-08-20 21:57:08,483 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type APPLICATION_CONTAINER_FINISHED 2016-08-20 21:57:08,483 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Removing container_1471710419543_0001_01_000003 from application application_1471710419543_0001 2016-08-20 21:57:08,483 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Application application_1471710419543_0001 transitioned from FINISHING_CONTAINERS_WAIT to APPLICATION_RESOURCES_CLEANINGUP 2016-08-20 21:57:08,483 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerStopMonitoringEvent.EventType: STOP_MONITORING_CONTAINER 2016-08-20 21:57:08,484 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: application_1471710419543_0001 is completing, remove container_1471710419543_0001_01_000003 from NM context. 
2016-08-20 21:57:08,484 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerContainerFinishedEvent.EventType: CONTAINER_FINISHED
2016-08-20 21:57:08,484 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEvent.EventType: CONTAINER_STOP
2016-08-20 21:57:08,484 INFO [AsyncDispatcher event handler] containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1471710419543_0001
2016-08-20 21:57:08,484 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.ApplicationLocalizationEvent.EventType: DESTROY_APPLICATION_RESOURCES
2016-08-20 21:57:08,484 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 2 container statuses: [ContainerStatus: [ContainerId: container_1471710419543_0001_01_000002, ExecutionType: GUARANTEED, State: COMPLETE, Capability: , Diagnostics: Container Killed by ResourceManager Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143. , ExitStatus: -106, ], ContainerStatus: [ContainerId: container_1471710419543_0001_01_000003, ExecutionType: GUARANTEED, State: COMPLETE, Capability: , Diagnostics: Container Killed by ResourceManager Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143. , ExitStatus: -106, ]]
2016-08-20 21:57:08,485 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEvent.EventType: APPLICATION_STOP
2016-08-20 21:57:08,485 INFO [AsyncDispatcher event handler] containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1471710419543_0001
2016-08-20 21:57:08,485 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEvent.EventType: APPLICATION_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,485 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type APPLICATION_RESOURCES_CLEANEDUP
2016-08-20 21:57:08,486 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,486 DEBUG [DeletionService #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,486 DEBUG [DeletionService #0] nodemanager.DeletionService: FileDeletionTask : user : root subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001 baseDir : null
2016-08-20 21:57:08,487 DEBUG [DeletionService #0] nodemanager.DeletionService: Deleting path: [/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001] as user: [root]
2016-08-20 21:57:08,487 INFO [DeletionService #0] nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/usercache/root/appcache/application_1471710419543_0001
2016-08-20 21:57:08,484 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (S=jmx,[/jmx]) as servletMapping
2016-08-20 21:57:08,486 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,487 DEBUG [DeletionService #0] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,487 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,486 DEBUG [AsyncDispatcher event handler] security.NMTokenSecretManagerInNM: Removing application attempts NMToken keys for application application_1471710419543_0001
2016-08-20 21:57:08,487 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,487 INFO [AsyncDispatcher event handler] application.ApplicationImpl: Application application_1471710419543_0001 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2016-08-20 21:57:08,487 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerAppFinishedEvent.EventType: APPLICATION_FINISHED
2016-08-20 21:57:08,487 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (S=logLevel,[/logLevel]) as servletMapping
2016-08-20 21:57:08,487 DEBUG [DeletionService #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,487 DEBUG [DeletionService #0] nodemanager.DeletionService: FileDeletionTask : user : null subDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001 baseDir : null
2016-08-20 21:57:08,487 DEBUG [DeletionService #0] nodemanager.DeletionService: NM deleting absolute path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-localDir-nm-0_0/nmPrivate/application_1471710419543_0001
2016-08-20 21:57:08,487 INFO [AsyncDispatcher event handler] loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1471710419543_0001, with delay of 1 seconds
2016-08-20 21:57:08,487 INFO [SchedulerEventDispatcher:Event Processor] scheduler.AbstractYarnScheduler: Container container_1471710419543_0001_01_000002 completed with event FINISHED, but corresponding RMContainer doesn't exist.
2016-08-20 21:57:08,488 INFO [SchedulerEventDispatcher:Event Processor] scheduler.AbstractYarnScheduler: Container container_1471710419543_0001_01_000003 completed with event FINISHED, but corresponding RMContainer doesn't exist.
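The FileDeletionTask records above are the NM's DeletionService removing the application's local appcache and nmPrivate directories through the DefaultContainerExecutor. A sketch of the underlying operation, assuming the hadoop-common FileContext API; the path is a hypothetical stand-in, and the real task additionally composes baseDir/subDir and routes through the configured executor:

import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class FileDeletionSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical stand-in for .../usercache/root/appcache/application_<id>
    Path appCacheDir = new Path("/tmp/nm-local-dir/usercache/root/appcache/app_demo");
    FileContext lfs = FileContext.getLocalFSFileContext();
    lfs.mkdir(appCacheDir, FsPermission.getDirDefault(), true);  // ensure it exists
    // recursive=true mirrors "Deleting absolute path : ..." in the records above
    boolean deleted = lfs.delete(appCacheDir, true);
    System.out.println("deleted=" + deleted);
  }
}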
2016-08-20 21:57:08,488 DEBUG [DeletionService #0] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:08,488 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,488 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (S=stacks,[/stacks]) as servletMapping
2016-08-20 21:57:08,488 DEBUG [main] mortbay.log: Container ServletHandler@b27b210 - (S=org.mortbay.jetty.servlet.DefaultServlet-50297459,[/]) as servletMapping
2016-08-20 21:57:08,488 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,488 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,488 DEBUG [main] mortbay.log: filterNameMap=null
2016-08-20 21:57:08,488 DEBUG [main] mortbay.log: pathFilters=null
2016-08-20 21:57:08,489 DEBUG [main] mortbay.log: servletFilterMap=null
2016-08-20 21:57:08,489 DEBUG [main] mortbay.log: servletPathMap=null
2016-08-20 21:57:08,489 DEBUG [main] mortbay.log: servletNameMap=null
2016-08-20 21:57:08,490 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.webapp.WebAppContext@57a6a933{/,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/node}
2016-08-20 21:57:08,490 INFO [main] mortbay.log: Stopped SelectChannelConnector@localhost:0
2016-08-20 21:57:08,490 DEBUG [main] mortbay.log: stopped SelectChannelConnector@localhost:0
2016-08-20 21:57:08,491 DEBUG [main] mortbay.log: stopping Server@41ccb3b9
2016-08-20 21:57:08,491 DEBUG [main] mortbay.log: stopping ContextHandlerCollection@53d9826f
2016-08-20 21:57:08,491 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.servlet.Context@38f77cd9{/static,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/static}
2016-08-20 21:57:08,491 DEBUG [main] mortbay.log: stopping SessionHandler@1e84f3c8
2016-08-20 21:57:08,491 DEBUG [main] mortbay.log: stopping ServletHandler@5f59ea8c
2016-08-20 21:57:08,491 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet-31567969
2016-08-20 21:57:08,492 DEBUG [main] mortbay.log: stopped ServletHandler@5f59ea8c
2016-08-20 21:57:08,492 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@7b2ccba5
2016-08-20 21:57:08,493 DEBUG [main] mortbay.log: stopped SessionHandler@1e84f3c8
2016-08-20 21:57:08,493 DEBUG [main] mortbay.log: stopping ErrorHandler@64f9f455
2016-08-20 21:57:08,493 DEBUG [main] mortbay.log: stopped ErrorHandler@64f9f455
2016-08-20 21:57:08,493 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.Context@38f77cd9{/static,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/static}
2016-08-20 21:57:08,493 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.servlet.Context@43acd79e{/logs,file:/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/log}
2016-08-20 21:57:08,493 DEBUG [main] mortbay.log: stopping SessionHandler@5d5a51b1
2016-08-20 21:57:08,493 DEBUG [main] mortbay.log: stopping ServletHandler@4dc7cd1c
2016-08-20 21:57:08,493 DEBUG [main] mortbay.log: stopped org.apache.hadoop.http.AdminAuthorizedServlet-981012032
2016-08-20 21:57:08,493 DEBUG [main] mortbay.log: stopped ServletHandler@4dc7cd1c
2016-08-20 21:57:08,494 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@50ab56e2
2016-08-20 21:57:08,495 DEBUG [main] mortbay.log: stopped SessionHandler@5d5a51b1
2016-08-20 21:57:08,495 DEBUG [main] mortbay.log: stopping ErrorHandler@7b5b5bfe
2016-08-20 21:57:08,495 DEBUG [main] mortbay.log: stopped ErrorHandler@7b5b5bfe
2016-08-20 21:57:08,495 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.Context@43acd79e{/logs,file:/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/log}
2016-08-20 21:57:08,495 DEBUG [main] mortbay.log: stopped ContextHandlerCollection@53d9826f
2016-08-20 21:57:08,495 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionIdManager@74834afd
2016-08-20 21:57:08,530 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,530 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,530 DEBUG [Task killer for 3819] nodemanager.DefaultContainerExecutor: Sending signal 9 to pid 3819 as user root
2016-08-20 21:57:08,530 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,530 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:43931 of type STATUS_UPDATE
2016-08-20 21:57:08,531 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,531 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:43931 clusterResources:
2016-08-20 21:57:08,531 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:43931 availableResource:
2016-08-20 21:57:08,531 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,531 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,559 DEBUG [Task killer for 3828] nodemanager.DefaultContainerExecutor: Sending signal 9 to pid 3828 as user root
2016-08-20 21:57:08,586 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,587 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,587 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,587 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,587 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,587 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,588 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,588 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,588 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,596 DEBUG [main] mortbay.log: stopped org.mortbay.thread.QueuedThreadPool@3b705be7
2016-08-20 21:57:08,596 DEBUG [main] mortbay.log: stopped Server@41ccb3b9
2016-08-20 21:57:08,596 DEBUG [main] service.CompositeService: Stopping service #3: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl in state org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: STARTED
2016-08-20 21:57:08,596 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl entered state STOPPED
2016-08-20 21:57:08,596 INFO [main] ipc.Server: Stopping server on 46239
2016-08-20 21:57:08,596 DEBUG [IPC Server handler 3 on 46239] ipc.Server: IPC Server handler 3 on 46239: exiting
2016-08-20 21:57:08,597 DEBUG [IPC Server handler 0 on 46239] ipc.Server: IPC Server handler 0 on 46239: exiting
2016-08-20 21:57:08,600 DEBUG [IPC Server handler 9 on 46239] ipc.Server: IPC Server handler 9 on 46239: exiting
2016-08-20 21:57:08,600 DEBUG [IPC Server handler 8 on 46239] ipc.Server: IPC Server handler 8 on 46239: exiting
2016-08-20 21:57:08,600 DEBUG [IPC Server handler 13 on 46239] ipc.Server: IPC Server handler 13 on 46239: exiting
2016-08-20 21:57:08,602 DEBUG [IPC Server handler 7 on 46239] ipc.Server: IPC Server handler 7 on 46239: exiting
2016-08-20 21:57:08,602 DEBUG [IPC Server handler 6 on 46239] ipc.Server: IPC Server handler 6 on 46239: exiting
2016-08-20 21:57:08,603 DEBUG [IPC Server handler 5 on 46239] ipc.Server: IPC Server handler 5 on 46239: exiting
2016-08-20 21:57:08,603 DEBUG [IPC Server handler 4 on 46239] ipc.Server: IPC Server handler 4 on 46239: exiting
2016-08-20 21:57:08,603 DEBUG [IPC Server handler 2 on 46239] ipc.Server: IPC Server handler 2 on 46239: exiting
2016-08-20 21:57:08,600 DEBUG [IPC Server handler 10 on 46239] ipc.Server: IPC Server handler 10 on 46239: exiting
2016-08-20 21:57:08,605 DEBUG [IPC Server handler 1 on 46239] ipc.Server: IPC Server handler 1 on 46239: exiting
2016-08-20 21:57:08,600 DEBUG [IPC Server handler 11 on 46239] ipc.Server: IPC Server handler 11 on 46239: exiting
2016-08-20 21:57:08,600 DEBUG [IPC Server handler 12 on 46239] ipc.Server: IPC Server handler 12 on 46239: exiting
2016-08-20 21:57:08,600 DEBUG [IPC Server Responder] ipc.Server: Checking for old call responses.
2016-08-20 21:57:08,600 DEBUG [IPC Server handler 14 on 46239] ipc.Server: IPC Server handler 14 on 46239: exiting
2016-08-20 21:57:08,600 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: stopping services, size=7
2016-08-20 21:57:08,599 DEBUG [IPC Server handler 15 on 46239] ipc.Server: IPC Server handler 15 on 46239: exiting
2016-08-20 21:57:08,599 DEBUG [IPC Server handler 16 on 46239] ipc.Server: IPC Server handler 16 on 46239: exiting
2016-08-20 21:57:08,599 DEBUG [IPC Server handler 17 on 46239] ipc.Server: IPC Server handler 17 on 46239: exiting
2016-08-20 21:57:08,599 DEBUG [IPC Server handler 18 on 46239] ipc.Server: IPC Server handler 18 on 46239: exiting
2016-08-20 21:57:08,599 DEBUG [IPC Server handler 19 on 46239] ipc.Server: IPC Server handler 19 on 46239: exiting
2016-08-20 21:57:08,605 DEBUG [main] service.CompositeService: Stopping service #6: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService in state org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService: STARTED
2016-08-20 21:57:08,605 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder
2016-08-20 21:57:08,607 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService entered state STOPPED
2016-08-20 21:57:08,607 DEBUG [main] service.CompositeService: Stopping service #5: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler in state org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: STARTED
2016-08-20 21:57:08,607 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler entered state STOPPED
2016-08-20 21:57:08,603 INFO [IPC Server listener on 46239] ipc.Server: Stopping IPC Server listener on 46239
2016-08-20 21:57:08,607 DEBUG [main] service.CompositeService: Stopping service #4: Service Dispatcher in state Dispatcher: STARTED
2016-08-20 21:57:08,608 DEBUG [main] service.AbstractService: Service: Dispatcher entered state STOPPED
2016-08-20 21:57:08,608 DEBUG [main] service.CompositeService: Stopping service #3: Service containers-monitor in state containers-monitor: STARTED
2016-08-20 21:57:08,608 DEBUG [main] service.AbstractService: Service: containers-monitor entered state STOPPED
2016-08-20 21:57:08,608 DEBUG [main] service.CompositeService: Stopping service #2: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices in state org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: STARTED
2016-08-20 21:57:08,608 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices entered state STOPPED
2016-08-20 21:57:08,608 DEBUG [main] service.CompositeService: Stopping service #1: Service containers-launcher in state containers-launcher: STARTED
2016-08-20 21:57:08,608 DEBUG [main] service.AbstractService: Service: containers-launcher entered state STOPPED
2016-08-20 21:57:08,608 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService in state org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: STARTED
2016-08-20 21:57:08,608 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService entered state STOPPED
2016-08-20 21:57:08,608 INFO [main] ipc.Server: Stopping server on 45915
2016-08-20 21:57:08,609 DEBUG [IPC Server handler 0 on 45915] ipc.Server: IPC Server handler 0 on 45915: exiting
2016-08-20 21:57:08,609 DEBUG [IPC Server handler 1 on 45915] ipc.Server: IPC Server handler 1 on 45915: exiting
2016-08-20 21:57:08,609 DEBUG [IPC Server handler 2 on 45915] ipc.Server: IPC Server handler 2 on 45915: exiting
2016-08-20 21:57:08,609 DEBUG [IPC Server handler 3 on 45915] ipc.Server: IPC Server handler 3 on 45915: exiting
2016-08-20 21:57:08,609 DEBUG [IPC Server handler 4 on 45915] ipc.Server: IPC Server handler 4 on 45915: exiting
2016-08-20 21:57:08,614 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: stopping services, size=1
2016-08-20 21:57:08,614 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker in state org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker: STARTED
2016-08-20 21:57:08,614 INFO [IPC Server listener on 45915] ipc.Server: Stopping IPC Server listener on 45915
2016-08-20 21:57:08,614 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker entered state STOPPED
2016-08-20 21:57:08,614 DEBUG [main] service.CompositeService: Stopping service #2: Service org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl in state org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl: STARTED
2016-08-20 21:57:08,614 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl entered state STOPPED
2016-08-20 21:57:08,614 DEBUG [IPC Server Responder] ipc.Server: Checking for old call responses.
2016-08-20 21:57:08,615 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder
2016-08-20 21:57:08,615 INFO [Public Localizer] localizer.ResourceLocalizationService: Public cache exiting
2016-08-20 21:57:08,614 WARN [Node Resource Monitor] nodemanager.NodeResourceMonitorImpl: org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl is interrupted. Exiting.
2016-08-20 21:57:08,615 DEBUG [main] service.CompositeService: Stopping service #1: Service org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService in state org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService: STARTED
2016-08-20 21:57:08,616 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService entered state STOPPED
2016-08-20 21:57:08,616 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService: stopping services, size=1
2016-08-20 21:57:08,616 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService in state org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService: STARTED
2016-08-20 21:57:08,616 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService entered state STOPPED
2016-08-20 21:57:08,616 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.DeletionService in state org.apache.hadoop.yarn.server.nodemanager.DeletionService: STARTED
2016-08-20 21:57:08,616 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.DeletionService entered state STOPPED
2016-08-20 21:57:08,616 DEBUG [main] impl.MetricsSystemImpl: refCount=4
2016-08-20 21:57:08,617 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.recovery.NMNullStateStoreService entered state STOPPED
2016-08-20 21:57:08,617 DEBUG [main] service.CompositeService: Stopping service #2: Service org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper_1 in state org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper_1: STARTED
2016-08-20 21:57:08,617 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper_1 entered state STOPPED
2016-08-20 21:57:08,617 DEBUG [main] service.AbstractService: Service: NodeManager entered state STOPPED
2016-08-20 21:57:08,617 DEBUG [main] service.CompositeService: NodeManager: stopping services, size=8
2016-08-20 21:57:08,618 DEBUG [main] service.CompositeService: Stopping service #7: Service org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl in state org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: STARTED
2016-08-20 21:57:08,618 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl entered state STOPPED
2016-08-20 21:57:08,618 INFO [main] nodemanager.NodeStatusUpdaterImpl: Successfully Unregistered the Node localhost:43931 with ResourceManager.
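The "Stopping service #7 ... #0" countdown in these records is standard CompositeService behavior: child services are added in bring-up order and stopped in reverse. A minimal sketch of that lifecycle under that assumption; the child names are illustrative, not the NodeManager's actual eight services:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.AbstractService;
import org.apache.hadoop.service.CompositeService;

public class CompositeServiceSketch {
  // Trivial child service that just reports when it is stopped.
  static class Child extends AbstractService {
    Child(String name) { super(name); }
    @Override protected void serviceStop() { System.out.println("stopped " + getName()); }
  }

  // Hypothetical parent, standing in for the NodeManager's service tree.
  static class NodeManagerLike extends CompositeService {
    NodeManagerLike() {
      super("NodeManager-like");
      addService(new Child("dispatcher"));          // service #0
      addService(new Child("containers-launcher")); // service #1
      addService(new Child("webserver"));           // service #2
    }
  }

  public static void main(String[] args) {
    NodeManagerLike nm = new NodeManagerLike();
    nm.init(new Configuration());
    nm.start();
    // Prints: webserver, containers-launcher, dispatcher -- reverse of add order,
    // matching the #N-down-to-#0 sequence in the log.
    nm.stop();
  }
}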
2016-08-20 21:57:08,618 DEBUG [main] service.CompositeService: Stopping service #6: Service org.apache.hadoop.util.JvmPauseMonitor in state org.apache.hadoop.util.JvmPauseMonitor: STARTED
2016-08-20 21:57:08,618 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.util.JvmPauseMonitor entered state STOPPED
2016-08-20 21:57:08,618 DEBUG [main] service.CompositeService: Stopping service #5: Service Dispatcher in state Dispatcher: STARTED
2016-08-20 21:57:08,618 DEBUG [main] service.AbstractService: Service: Dispatcher entered state STOPPED
2016-08-20 21:57:08,618 DEBUG [main] service.CompositeService: Stopping service #4: Service org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer in state org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer: STARTED
2016-08-20 21:57:08,619 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer entered state STOPPED
2016-08-20 21:57:08,619 DEBUG [main] webapp.WebServer: Stopping webapp
2016-08-20 21:57:08,623 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.nio.SelectChannelConnector$1@75fd65c
2016-08-20 21:57:08,623 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.webapp.WebAppContext@d499c13{/,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/node}
2016-08-20 21:57:08,623 DEBUG [main] mortbay.log: stopping SessionHandler@2ceca2ef
2016-08-20 21:57:08,623 DEBUG [main] mortbay.log: stopping SecurityHandler@42d6c12d
2016-08-20 21:57:08,623 DEBUG [main] mortbay.log: stopping ServletHandler@3b42121d
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped guice
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped org.apache.hadoop.security.http.XFrameOptionsFilter
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped static_user_filter
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped safety
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped NoCacheFilter
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped NoCacheFilter
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped conf
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped jmx
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped logLevel
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped stacks
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@7a2ab862
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet-1259283097
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped ServletHandler@3b42121d
2016-08-20 21:57:08,624 DEBUG [main] mortbay.log: stopped SecurityHandler@42d6c12d
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@33188612
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: stopped SessionHandler@2ceca2ef
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: stopping ErrorPageErrorHandler@4733f6f5
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: stopped ErrorPageErrorHandler@4733f6f5
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - guice as filter
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - org.apache.hadoop.security.http.XFrameOptionsFilter as filter
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - static_user_filter as filter
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - safety as filter
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - NoCacheFilter as filter
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - NoCacheFilter as filter
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=guice,[/*],[],15) as filterMapping
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=org.apache.hadoop.security.http.XFrameOptionsFilter,[/*],[],15) as filterMapping
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=static_user_filter,[/ws/*],[],15) as filterMapping
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=static_user_filter,[/node/*],[],15) as filterMapping
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=static_user_filter,[/conf],[],15) as filterMapping
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=static_user_filter,[/jmx],[],15) as filterMapping
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=static_user_filter,[/logLevel],[],15) as filterMapping
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=static_user_filter,[/stacks],[],15) as filterMapping
2016-08-20 21:57:08,625 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=static_user_filter,[*.html, *.jsp],[],15) as filterMapping
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=safety,[/*],[],15) as filterMapping
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=NoCacheFilter,[/*],[],15) as filterMapping
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (F=NoCacheFilter,[/*],[],15) as filterMapping
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: filterNameMap=null
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: pathFilters=null
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: servletFilterMap=null
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: servletPathMap={/jmx=jmx, /conf=conf, /stacks=stacks, /logLevel=logLevel, /=org.mortbay.jetty.servlet.DefaultServlet-1259283097}
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: servletNameMap={logLevel=logLevel, jmx=jmx, stacks=stacks, conf=conf, org.mortbay.jetty.servlet.DefaultServlet-1259283097=org.mortbay.jetty.servlet.DefaultServlet-1259283097}
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - conf as servlet
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - jmx as servlet
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - logLevel as servlet
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - stacks as servlet
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - org.mortbay.jetty.servlet.DefaultServlet-1259283097 as servlet
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (S=conf,[/conf]) as servletMapping
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (S=jmx,[/jmx]) as servletMapping
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (S=logLevel,[/logLevel]) as servletMapping
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (S=stacks,[/stacks]) as servletMapping
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: Container ServletHandler@3b42121d - (S=org.mortbay.jetty.servlet.DefaultServlet-1259283097,[/]) as servletMapping
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: filterNameMap=null
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: pathFilters=null
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: servletFilterMap=null
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: servletPathMap=null
2016-08-20 21:57:08,626 DEBUG [main] mortbay.log: servletNameMap=null
2016-08-20 21:57:08,627 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.webapp.WebAppContext@d499c13{/,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/node}
2016-08-20 21:57:08,627 INFO [main] mortbay.log: Stopped SelectChannelConnector@localhost:0
2016-08-20 21:57:08,627 DEBUG [main] mortbay.log: stopped SelectChannelConnector@localhost:0
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopping Server@71b6172c
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopping ContextHandlerCollection@58aa10f4
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.servlet.Context@12704e15{/static,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/static}
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopping SessionHandler@4fb56bea
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopping ServletHandler@5e5beb8a
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet-91831175
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopped ServletHandler@5e5beb8a
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@1f95881a
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopped SessionHandler@4fb56bea
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopping ErrorHandler@17b016ac
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopped ErrorHandler@17b016ac
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.Context@12704e15{/static,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/static}
2016-08-20 21:57:08,628 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.servlet.Context@70730db{/logs,file:/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/log}
2016-08-20 21:57:08,629 DEBUG [main] mortbay.log: stopping SessionHandler@733ec58b
2016-08-20 21:57:08,629 DEBUG [main] mortbay.log: stopping ServletHandler@723877dd
2016-08-20 21:57:08,629 DEBUG [main] mortbay.log: stopped org.apache.hadoop.http.AdminAuthorizedServlet-892262157
2016-08-20 21:57:08,629 DEBUG [main] mortbay.log: stopped ServletHandler@723877dd
2016-08-20 21:57:08,629 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@415ef4d8
2016-08-20 21:57:08,630 DEBUG [main] mortbay.log: stopped SessionHandler@733ec58b
2016-08-20 21:57:08,630 DEBUG [main] mortbay.log: stopping ErrorHandler@56cc9f29
2016-08-20 21:57:08,630 DEBUG [main] mortbay.log: stopped ErrorHandler@56cc9f29
2016-08-20 21:57:08,631 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.Context@70730db{/logs,file:/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/log}
2016-08-20 21:57:08,631 DEBUG [main] mortbay.log: stopped ContextHandlerCollection@58aa10f4
2016-08-20 21:57:08,631 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionIdManager@535b1ae6
2016-08-20 21:57:08,681 DEBUG [Task killer for 3836] nodemanager.DefaultContainerExecutor: Sending signal 9 to pid 3836 as user root
2016-08-20 21:57:08,687 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,688 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,688 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,688 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,688 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,689 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,689 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,689 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,689 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,733 DEBUG [main] mortbay.log: stopped org.mortbay.thread.QueuedThreadPool@15405bd6
2016-08-20 21:57:08,733 DEBUG [main] mortbay.log: stopped Server@71b6172c
2016-08-20 21:57:08,734 DEBUG [main] service.CompositeService: Stopping service #3: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl in state org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: STARTED
2016-08-20 21:57:08,734 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl entered state STOPPED
2016-08-20 21:57:08,734 INFO [main] containermanager.ContainerManagerImpl: Applications still running : [application_1471710419543_0001]
2016-08-20 21:57:08,734 INFO [main] containermanager.ContainerManagerImpl: Waiting for Applications to be Finished
2016-08-20 21:57:08,734 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationFinishEvent.EventType: FINISH_APPLICATION
2016-08-20 21:57:08,734 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type FINISH_APPLICATION
2016-08-20 21:57:08,788 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,789 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,789 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,790 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,791 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,791 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,791 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,791 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,791 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,889 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,889 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,890 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,890 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,890 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,891 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,891 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,891 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,891 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:08,990 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:08,991 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:08,991 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:08,991 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:08,992 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:08,992 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:08,992 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:08,992 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:08,992 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:09,092 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:09,092 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:09,093 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:09,093 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:09,093 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:09,093 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:09,093 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:09,093 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:09,093 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:09,193 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:09,193 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:09,194 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:09,194 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:09,194 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:09,194 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:09,195 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:09,195 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:09,195 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:09,294 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:09,294 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:09,295 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:09,295 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:09,295 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:09,296 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:09,296 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:09,296 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:09,296 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:09,351 DEBUG [LogDeleter #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: LogDeleter #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:09,351 DEBUG [LogDeleter #0] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:327)
2016-08-20 21:57:09,353 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEvent.EventType: APPLICATION_LOG_HANDLING_FINISHED
2016-08-20 21:57:09,354 DEBUG [DeletionService #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:09,354 DEBUG [DeletionService #0] nodemanager.DeletionService: FileDeletionTask : user : root subDir : null baseDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-logDir-nm-1_0/application_1471710419543_0001,
2016-08-20 21:57:09,354 DEBUG [DeletionService #0] nodemanager.DeletionService: Deleting path: [null] as user: [root]
2016-08-20 21:57:09,354 INFO [DeletionService #0] nodemanager.DefaultContainerExecutor: Deleting path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-logDir-nm-1_0/application_1471710419543_0001
2016-08-20 21:57:09,355 DEBUG [DeletionService #0] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:09,353 DEBUG [LogDeleter #0] concurrent.ExecutorHelper: afterExecute in thread: LogDeleter #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:09,354 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type APPLICATION_LOG_HANDLING_FINISHED
2016-08-20 21:57:09,395 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:09,396 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:09,396 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:09,397 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:09,397 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:09,397 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:09,397 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:09,398 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:09,398 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:09,488 DEBUG [LogDeleter #0] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: LogDeleter #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:09,488 DEBUG [LogDeleter #0] security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:327)
2016-08-20 21:57:09,490 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEvent.EventType: APPLICATION_LOG_HANDLING_FINISHED
2016-08-20 21:57:09,490 DEBUG [DeletionService #3] concurrent.HadoopScheduledThreadPoolExecutor: beforeExecute in thread: DeletionService #3, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:09,490 DEBUG [LogDeleter #0] concurrent.ExecutorHelper: afterExecute in thread: LogDeleter #0, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:09,490 DEBUG [DeletionService #3] nodemanager.DeletionService: FileDeletionTask : user : root subDir : null baseDir : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-logDir-nm-0_0/application_1471710419543_0001,
2016-08-20 21:57:09,490 DEBUG [DeletionService #3] nodemanager.DeletionService: Deleting path: [null] as user: [root]
2016-08-20 21:57:09,490 DEBUG [AsyncDispatcher event handler] application.ApplicationImpl: Processing application_1471710419543_0001 of type APPLICATION_LOG_HANDLING_FINISHED
2016-08-20 21:57:09,490 INFO [DeletionService #3] nodemanager.DefaultContainerExecutor: Deleting path : /opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient/org.apache.hadoop.yarn.client.api.impl.TestAMRMClient-logDir-nm-0_0/application_1471710419543_0001
2016-08-20 21:57:09,491 DEBUG [DeletionService #3] concurrent.ExecutorHelper: afterExecute in thread: DeletionService #3, runnable type: java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
2016-08-20 21:57:09,497 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:09,497 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:09,498 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:09,498 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:09,498 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:09,498 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:09,498 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:09,498 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:09,499 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:09,598 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:09,598 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:09,599 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:09,599 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:09,599 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:09,599 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:09,599 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:09,600 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:09,600 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:09,644 DEBUG [IPC Server idle connection scanner for port 45325] ipc.Server: IPC Server idle connection scanner for port 45325: task running
2016-08-20 21:57:09,698 DEBUG [IPC Server idle connection scanner for port 37347] ipc.Server: IPC Server idle connection scanner for port 37347: task running
2016-08-20 21:57:09,699 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Node's health-status : true,
2016-08-20 21:57:09,699 DEBUG [Node Status Updater] nodemanager.NodeStatusUpdaterImpl: Sending out 0 container statuses: []
2016-08-20 21:57:09,700 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType: STATUS_UPDATE
2016-08-20 21:57:09,702 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl: Processing localhost:36489 of type STATUS_UPDATE
2016-08-20 21:57:09,702 DEBUG [AsyncDispatcher event handler] event.AsyncDispatcher: Dispatching the event org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType: NODE_UPDATE
2016-08-20 21:57:09,702 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: nodeUpdate: localhost:36489 clusterResources:
2016-08-20 21:57:09,702 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Node being looked for scheduling localhost:36489 availableResource:
2016-08-20 21:57:09,702 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.CapacityScheduler: Trying to schedule on node: localhost, available:
2016-08-20 21:57:09,702 DEBUG [SchedulerEventDispatcher:Event Processor] capacity.ParentQueue: Skip this queue=root, because it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY node-partition=
2016-08-20 21:57:09,734 INFO [main] containermanager.ContainerManagerImpl: All applications in FINISHED state
2016-08-20 21:57:09,734 INFO [main] ipc.Server: Stopping server on 43931
2016-08-20 21:57:09,735 DEBUG [IPC Server handler 0 on 43931] ipc.Server: IPC Server handler 0 on 43931: exiting
2016-08-20 21:57:09,735 DEBUG [IPC Server handler 5 on 43931] ipc.Server: IPC Server handler 5 on 43931: exiting
2016-08-20 21:57:09,735 DEBUG [IPC Server handler 8 on 43931] ipc.Server: IPC Server handler 8 on 43931: exiting
2016-08-20 21:57:09,735 DEBUG [IPC Server handler 12 on 43931] ipc.Server: IPC Server handler 12 on 43931: exiting
2016-08-20 21:57:09,736 DEBUG [IPC Server handler 2 on 43931] ipc.Server: IPC Server handler 2 on 43931: exiting
2016-08-20 21:57:09,735 DEBUG [IPC Server handler 14 on 43931] ipc.Server: IPC Server handler 14 on 43931: exiting
2016-08-20 21:57:09,738 DEBUG [IPC Server handler 9 on 43931] ipc.Server: IPC Server handler 9 on 43931: exiting
2016-08-20 21:57:09,739 DEBUG [IPC Server handler 10 on 43931] ipc.Server: IPC Server handler 10 on 43931: exiting
2016-08-20 21:57:09,739 DEBUG [IPC Server handler 19 on 43931] ipc.Server: IPC Server handler 19 on 43931: exiting
2016-08-20 21:57:09,735 DEBUG [IPC Server handler 13 on 43931] ipc.Server: IPC Server handler 13 on 43931: exiting
2016-08-20 21:57:09,735 DEBUG [IPC Server handler 16 on 43931] ipc.Server: IPC Server handler 16 on 43931: exiting
2016-08-20 21:57:09,740 DEBUG [IPC Server Responder] ipc.Server: Checking for old call responses.
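The LogDeleter records above follow from the earlier "Scheduling Log Deletion for application ... with delay of 1 seconds": the NonAggregatingLogHandler queues the app-log directory delete on a scheduled thread pool, and it fires roughly a second later on the "LogDeleter #0" thread. A pure-JDK sketch of that scheduling pattern; the path and the one-second delay are stand-ins for the test's yarn.nodemanager.log.retain-seconds setting:

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LogDeleterSketch {
  public static void main(String[] args) throws InterruptedException {
    ScheduledThreadPoolExecutor sched = new ScheduledThreadPoolExecutor(1);
    String appLogDir = "/tmp/nm-log-dir/application_demo";  // hypothetical path
    // Queue the deletion to run after the retain delay (1s in this test's log).
    sched.schedule(
        () -> System.out.println("Deleting path : " + appLogDir),
        1, TimeUnit.SECONDS);
    sched.shutdown();
    sched.awaitTermination(5, TimeUnit.SECONDS);
  }
}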
2016-08-20 21:57:09,739 INFO [IPC Server listener on 43931] ipc.Server: Stopping IPC Server listener on 43931 2016-08-20 21:57:09,739 DEBUG [IPC Server handler 17 on 43931] ipc.Server: IPC Server handler 17 on 43931: exiting 2016-08-20 21:57:09,739 DEBUG [IPC Server handler 3 on 43931] ipc.Server: IPC Server handler 3 on 43931: exiting 2016-08-20 21:57:09,739 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: stopping services, size=7 2016-08-20 21:57:09,739 DEBUG [IPC Server handler 6 on 43931] ipc.Server: IPC Server handler 6 on 43931: exiting 2016-08-20 21:57:09,739 DEBUG [IPC Server handler 4 on 43931] ipc.Server: IPC Server handler 4 on 43931: exiting 2016-08-20 21:57:09,738 DEBUG [IPC Server handler 15 on 43931] ipc.Server: IPC Server handler 15 on 43931: exiting 2016-08-20 21:57:09,739 DEBUG [IPC Server handler 1 on 43931] ipc.Server: IPC Server handler 1 on 43931: exiting 2016-08-20 21:57:09,738 DEBUG [IPC Server handler 11 on 43931] ipc.Server: IPC Server handler 11 on 43931: exiting 2016-08-20 21:57:09,738 DEBUG [IPC Server handler 18 on 43931] ipc.Server: IPC Server handler 18 on 43931: exiting 2016-08-20 21:57:09,738 DEBUG [IPC Server handler 7 on 43931] ipc.Server: IPC Server handler 7 on 43931: exiting 2016-08-20 21:57:09,741 DEBUG [IPC Server listener on 43931] ipc.Server: IPC Server listener on 43931: disconnecting client 127.0.0.1:43416. Number of active connections: 1 2016-08-20 21:57:09,741 DEBUG [main] service.CompositeService: Stopping service #6: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService in state org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService: STARTED 2016-08-20 21:57:09,740 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder 2016-08-20 21:57:09,744 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService entered state STOPPED 2016-08-20 21:57:09,744 DEBUG [IPC Server listener on 43931] ipc.Server: IPC Server listener on 43931: disconnecting client 127.0.0.1:43400. 
Number of active connections: 0 2016-08-20 21:57:09,744 DEBUG [main] service.CompositeService: Stopping service #5: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler in state org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: STARTED 2016-08-20 21:57:09,744 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler entered state STOPPED 2016-08-20 21:57:09,745 DEBUG [main] service.CompositeService: Stopping service #4: Service Dispatcher in state Dispatcher: STARTED 2016-08-20 21:57:09,746 DEBUG [main] service.AbstractService: Service: Dispatcher entered state STOPPED 2016-08-20 21:57:09,747 DEBUG [main] service.CompositeService: Stopping service #3: Service containers-monitor in state containers-monitor: STARTED 2016-08-20 21:57:09,747 DEBUG [main] service.AbstractService: Service: containers-monitor entered state STOPPED 2016-08-20 21:57:09,747 DEBUG [main] service.CompositeService: Stopping service #2: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices in state org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: STARTED 2016-08-20 21:57:09,747 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices entered state STOPPED 2016-08-20 21:57:09,748 DEBUG [main] service.CompositeService: Stopping service #1: Service containers-launcher in state containers-launcher: STARTED 2016-08-20 21:57:09,748 DEBUG [main] service.AbstractService: Service: containers-launcher entered state STOPPED 2016-08-20 21:57:09,748 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService in state org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: STARTED 2016-08-20 21:57:09,748 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService entered state STOPPED 2016-08-20 21:57:09,748 INFO [main] ipc.Server: Stopping server on 33955 2016-08-20 21:57:09,749 DEBUG [IPC Server handler 0 on 33955] ipc.Server: IPC Server handler 0 on 33955: exiting 2016-08-20 21:57:09,751 DEBUG [IPC Server handler 4 on 33955] ipc.Server: IPC Server handler 4 on 33955: exiting 2016-08-20 21:57:09,751 DEBUG [IPC Server handler 1 on 33955] ipc.Server: IPC Server handler 1 on 33955: exiting 2016-08-20 21:57:09,750 DEBUG [IPC Server handler 3 on 33955] ipc.Server: IPC Server handler 3 on 33955: exiting 2016-08-20 21:57:09,750 DEBUG [IPC Server handler 2 on 33955] ipc.Server: IPC Server handler 2 on 33955: exiting 2016-08-20 21:57:09,752 INFO [IPC Server listener on 33955] ipc.Server: Stopping IPC Server listener on 33955 2016-08-20 21:57:09,753 DEBUG [IPC Server Responder] ipc.Server: Checking for old call responses. 
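The "Stopping service #6 ... #0" countdown above is CompositeService's contract: children are added and started in index order and stopped in reverse, so the service started last (here SharedCacheUploadService) is torn down first, and ResourceLocalizationService, started first, goes last. A minimal sketch of that behavior, assuming hadoop-common's org.apache.hadoop.service package; the class and child names are made up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.service.AbstractService;
    import org.apache.hadoop.service.CompositeService;

    public class ReverseStopSketch {
      // Trivial child that reports when its stop hook runs.
      static class Child extends AbstractService {
        Child(String name) { super(name); }
        @Override protected void serviceStop() { System.out.println("stopped " + getName()); }
      }

      public static void main(String[] args) {
        CompositeService parent = new CompositeService("parent") {
          { // initializer block: children are inited/started in add order (#0 first)
            addService(new Child("child-0"));
            addService(new Child("child-1"));
            addService(new Child("child-2"));
          }
        };
        parent.init(new Configuration());
        parent.start();
        parent.stop();  // prints child-2, child-1, child-0: reverse of add order
      }
    }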
2016-08-20 21:57:09,754 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder 2016-08-20 21:57:09,754 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: stopping services, size=1 2016-08-20 21:57:09,754 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker in state org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker: STARTED 2016-08-20 21:57:09,755 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker entered state STOPPED 2016-08-20 21:57:09,755 DEBUG [main] service.CompositeService: Stopping service #2: Service org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl in state org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl: STARTED 2016-08-20 21:57:09,755 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl entered state STOPPED 2016-08-20 21:57:09,755 INFO [Public Localizer] localizer.ResourceLocalizationService: Public cache exiting 2016-08-20 21:57:09,755 WARN [Node Resource Monitor] nodemanager.NodeResourceMonitorImpl: org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl is interrupted. Exiting. 2016-08-20 21:57:09,755 DEBUG [main] service.CompositeService: Stopping service #1: Service org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService in state org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService: STARTED 2016-08-20 21:57:09,755 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService entered state STOPPED 2016-08-20 21:57:09,756 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService: stopping services, size=1 2016-08-20 21:57:09,756 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService in state org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService: STARTED 2016-08-20 21:57:09,756 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService entered state STOPPED 2016-08-20 21:57:09,757 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.DeletionService in state org.apache.hadoop.yarn.server.nodemanager.DeletionService: STARTED 2016-08-20 21:57:09,757 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.DeletionService entered state STOPPED 2016-08-20 21:57:09,758 DEBUG [main] impl.MetricsSystemImpl: refCount=3 2016-08-20 21:57:09,758 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.recovery.NMNullStateStoreService entered state STOPPED 2016-08-20 21:57:09,758 DEBUG [main] service.CompositeService: Stopping service #1: Service org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper_0 in state org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper_0: STARTED 2016-08-20 21:57:09,759 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper_0 entered state STOPPED 2016-08-20 21:57:09,759 DEBUG [main] 
service.AbstractService: Service: NodeManager entered state STOPPED 2016-08-20 21:57:09,759 DEBUG [main] service.CompositeService: NodeManager: stopping services, size=8 2016-08-20 21:57:09,759 DEBUG [main] service.CompositeService: Stopping service #7: Service org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl in state org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: STARTED 2016-08-20 21:57:09,759 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl entered state STOPPED 2016-08-20 21:57:09,759 INFO [main] nodemanager.NodeStatusUpdaterImpl: Successfully Unregistered the Node localhost:36489 with ResourceManager. 2016-08-20 21:57:09,759 DEBUG [main] service.CompositeService: Stopping service #6: Service org.apache.hadoop.util.JvmPauseMonitor in state org.apache.hadoop.util.JvmPauseMonitor: STARTED 2016-08-20 21:57:09,759 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.util.JvmPauseMonitor entered state STOPPED 2016-08-20 21:57:09,759 DEBUG [main] service.CompositeService: Stopping service #5: Service Dispatcher in state Dispatcher: STARTED 2016-08-20 21:57:09,760 DEBUG [main] service.AbstractService: Service: Dispatcher entered state STOPPED 2016-08-20 21:57:09,760 DEBUG [main] service.CompositeService: Stopping service #4: Service org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer in state org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer: STARTED 2016-08-20 21:57:09,760 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer entered state STOPPED 2016-08-20 21:57:09,760 DEBUG [main] webapp.WebServer: Stopping webapp 2016-08-20 21:57:09,763 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.nio.SelectChannelConnector$1@626c19cf 2016-08-20 21:57:09,763 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.webapp.WebAppContext@54a2d96e{/,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/node} 2016-08-20 21:57:09,764 DEBUG [main] mortbay.log: stopping SessionHandler@66a53104 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopping SecurityHandler@6d229b1c 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopping ServletHandler@6f825516 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped guice 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped org.apache.hadoop.security.http.XFrameOptionsFilter 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped static_user_filter 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped safety 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped NoCacheFilter 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped NoCacheFilter 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped conf 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped jmx 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped logLevel 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped stacks 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@2da99821 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet-830381116 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped ServletHandler@6f825516 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped SecurityHandler@6d229b1c 2016-08-20 21:57:09,765 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@62cba181 
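Every "Service: X entered state STOPPED" line above is AbstractService's state machine talking: a Hadoop service moves NOTINITED to INITED to STARTED to STOPPED, logs each transition at DEBUG, and treats stop() as idempotent. A small lifecycle sketch, assuming org.apache.hadoop.service from hadoop-common; "demo" is an arbitrary name:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.service.AbstractService;
    import org.apache.hadoop.service.Service;

    public class ServiceStateSketch {
      public static void main(String[] args) {
        // Anonymous subclass; AbstractService's lifecycle hooks default to no-ops.
        Service s = new AbstractService("demo") {};
        System.out.println(s.getServiceState());  // NOTINITED
        s.init(new Configuration());
        System.out.println(s.getServiceState());  // INITED
        s.start();
        System.out.println(s.getServiceState());  // STARTED
        s.stop();  // emits the DEBUG line "Service: demo entered state STOPPED"
        System.out.println(s.getServiceState());  // STOPPED
        s.stop();  // a second stop is a no-op, not an error
      }
    }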
2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: stopped SessionHandler@66a53104 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: stopping ErrorPageErrorHandler@1b482cbf 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: stopped ErrorPageErrorHandler@1b482cbf 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - guice as filter 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - org.apache.hadoop.security.http.XFrameOptionsFilter as filter 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - static_user_filter as filter 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - safety as filter 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - NoCacheFilter as filter 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - NoCacheFilter as filter 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=guice,[/*],[],15) as filterMapping 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=org.apache.hadoop.security.http.XFrameOptionsFilter,[/*],[],15) as filterMapping 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=static_user_filter,[/ws/*],[],15) as filterMapping 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=static_user_filter,[/node/*],[],15) as filterMapping 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=static_user_filter,[/conf],[],15) as filterMapping 2016-08-20 21:57:09,766 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=static_user_filter,[/jmx],[],15) as filterMapping 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=static_user_filter,[/logLevel],[],15) as filterMapping 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=static_user_filter,[/stacks],[],15) as filterMapping 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=static_user_filter,[*.html, *.jsp],[],15) as filterMapping 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=safety,[/*],[],15) as filterMapping 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=NoCacheFilter,[/*],[],15) as filterMapping 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (F=NoCacheFilter,[/*],[],15) as filterMapping 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: filterNameMap=null 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: pathFilters=null 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: servletFilterMap=null 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: servletPathMap={/jmx=jmx, /conf=conf, /stacks=stacks, /logLevel=logLevel, /=org.mortbay.jetty.servlet.DefaultServlet-830381116} 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: servletNameMap={logLevel=logLevel, jmx=jmx, stacks=stacks, org.mortbay.jetty.servlet.DefaultServlet-830381116=org.mortbay.jetty.servlet.DefaultServlet-830381116, conf=conf} 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - conf as servlet 2016-08-20 21:57:09,767 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - jmx as servlet 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - logLevel as servlet 
2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - stacks as servlet 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - org.mortbay.jetty.servlet.DefaultServlet-830381116 as servlet 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (S=conf,[/conf]) as servletMapping 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (S=jmx,[/jmx]) as servletMapping 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (S=logLevel,[/logLevel]) as servletMapping 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (S=stacks,[/stacks]) as servletMapping 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: Container ServletHandler@6f825516 - (S=org.mortbay.jetty.servlet.DefaultServlet-830381116,[/]) as servletMapping 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: filterNameMap=null 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: pathFilters=null 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: servletFilterMap=null 2016-08-20 21:57:09,768 DEBUG [main] mortbay.log: servletPathMap=null 2016-08-20 21:57:09,769 DEBUG [main] mortbay.log: servletNameMap=null 2016-08-20 21:57:09,772 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.webapp.WebAppContext@54a2d96e{/,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/node} 2016-08-20 21:57:09,772 INFO [main] mortbay.log: Stopped SelectChannelConnector@localhost:0 2016-08-20 21:57:09,773 DEBUG [main] mortbay.log: stopped SelectChannelConnector@localhost:0 2016-08-20 21:57:09,773 DEBUG [main] mortbay.log: stopping Server@4c1d59cd 2016-08-20 21:57:09,773 DEBUG [main] mortbay.log: stopping ContextHandlerCollection@76cf841 2016-08-20 21:57:09,773 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.servlet.Context@6f3e19b3{/static,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/static} 2016-08-20 21:57:09,773 DEBUG [main] mortbay.log: stopping SessionHandler@297c9a9b 2016-08-20 21:57:09,773 DEBUG [main] mortbay.log: stopping ServletHandler@20999517 2016-08-20 21:57:09,773 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet-1100288091 2016-08-20 21:57:09,773 DEBUG [main] mortbay.log: stopped ServletHandler@20999517 2016-08-20 21:57:09,775 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@6ec63f8 2016-08-20 21:57:09,775 DEBUG [main] mortbay.log: stopped SessionHandler@297c9a9b 2016-08-20 21:57:09,775 DEBUG [main] mortbay.log: stopping ErrorHandler@66223d94 2016-08-20 21:57:09,775 DEBUG [main] mortbay.log: stopped ErrorHandler@66223d94 2016-08-20 21:57:09,775 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.Context@6f3e19b3{/static,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/static} 2016-08-20 21:57:09,776 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.servlet.Context@2dd2e270{/logs,file:/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/log} 2016-08-20 21:57:09,776 DEBUG [main] mortbay.log: stopping SessionHandler@2bc7db89 2016-08-20 21:57:09,776 DEBUG [main] mortbay.log: stopping ServletHandler@479ac2cb 2016-08-20 21:57:09,776 DEBUG [main] mortbay.log: stopped org.apache.hadoop.http.AdminAuthorizedServlet-1753746465 2016-08-20 
21:57:09,776 DEBUG [main] mortbay.log: stopped ServletHandler@479ac2cb 2016-08-20 21:57:09,777 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@220c9a63 2016-08-20 21:57:09,778 DEBUG [main] mortbay.log: stopped SessionHandler@2bc7db89 2016-08-20 21:57:09,778 DEBUG [main] mortbay.log: stopping ErrorHandler@55b5cd2b 2016-08-20 21:57:09,778 DEBUG [main] mortbay.log: stopped ErrorHandler@55b5cd2b 2016-08-20 21:57:09,778 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.Context@2dd2e270{/logs,file:/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/log} 2016-08-20 21:57:09,778 DEBUG [main] mortbay.log: stopped ContextHandlerCollection@76cf841 2016-08-20 21:57:09,778 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionIdManager@40bb4f87 2016-08-20 21:57:09,796 DEBUG [IPC Server idle connection scanner for port 44193] ipc.Server: IPC Server idle connection scanner for port 44193: task running 2016-08-20 21:57:09,878 DEBUG [main] mortbay.log: stopped org.mortbay.thread.QueuedThreadPool@31a3f4de 2016-08-20 21:57:09,879 DEBUG [main] mortbay.log: stopped Server@4c1d59cd 2016-08-20 21:57:09,879 DEBUG [main] service.CompositeService: Stopping service #3: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl in state org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: STARTED 2016-08-20 21:57:09,879 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl entered state STOPPED 2016-08-20 21:57:09,879 INFO [main] ipc.Server: Stopping server on 36489 2016-08-20 21:57:09,879 DEBUG [IPC Server handler 0 on 36489] ipc.Server: IPC Server handler 0 on 36489: exiting 2016-08-20 21:57:09,881 DEBUG [IPC Server handler 1 on 36489] ipc.Server: IPC Server handler 1 on 36489: exiting 2016-08-20 21:57:09,881 DEBUG [IPC Server handler 2 on 36489] ipc.Server: IPC Server handler 2 on 36489: exiting 2016-08-20 21:57:09,882 DEBUG [IPC Server handler 6 on 36489] ipc.Server: IPC Server handler 6 on 36489: exiting 2016-08-20 21:57:09,882 DEBUG [IPC Server handler 10 on 36489] ipc.Server: IPC Server handler 10 on 36489: exiting 2016-08-20 21:57:09,882 DEBUG [IPC Server handler 8 on 36489] ipc.Server: IPC Server handler 8 on 36489: exiting 2016-08-20 21:57:09,883 DEBUG [IPC Server handler 11 on 36489] ipc.Server: IPC Server handler 11 on 36489: exiting 2016-08-20 21:57:09,882 DEBUG [IPC Server handler 4 on 36489] ipc.Server: IPC Server handler 4 on 36489: exiting 2016-08-20 21:57:09,884 DEBUG [IPC Server handler 17 on 36489] ipc.Server: IPC Server handler 17 on 36489: exiting 2016-08-20 21:57:09,882 DEBUG [IPC Server handler 9 on 36489] ipc.Server: IPC Server handler 9 on 36489: exiting 2016-08-20 21:57:09,882 DEBUG [IPC Server handler 18 on 36489] ipc.Server: IPC Server handler 18 on 36489: exiting 2016-08-20 21:57:09,882 DEBUG [IPC Server handler 16 on 36489] ipc.Server: IPC Server handler 16 on 36489: exiting 2016-08-20 21:57:09,882 DEBUG [IPC Server handler 13 on 36489] ipc.Server: IPC Server handler 13 on 36489: exiting 2016-08-20 21:57:09,885 DEBUG [IPC Server handler 7 on 36489] ipc.Server: IPC Server handler 7 on 36489: exiting 2016-08-20 21:57:09,885 DEBUG [IPC Server handler 5 on 36489] ipc.Server: IPC Server handler 5 on 36489: exiting 2016-08-20 21:57:09,885 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: stopping services, size=7 
2016-08-20 21:57:09,884 DEBUG [IPC Server handler 3 on 36489] ipc.Server: IPC Server handler 3 on 36489: exiting 2016-08-20 21:57:09,884 DEBUG [IPC Server handler 19 on 36489] ipc.Server: IPC Server handler 19 on 36489: exiting 2016-08-20 21:57:09,884 DEBUG [IPC Server Responder] ipc.Server: Checking for old call responses. 2016-08-20 21:57:09,886 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder 2016-08-20 21:57:09,884 DEBUG [IPC Server handler 15 on 36489] ipc.Server: IPC Server handler 15 on 36489: exiting 2016-08-20 21:57:09,883 DEBUG [IPC Server handler 14 on 36489] ipc.Server: IPC Server handler 14 on 36489: exiting 2016-08-20 21:57:09,883 INFO [IPC Server listener on 36489] ipc.Server: Stopping IPC Server listener on 36489 2016-08-20 21:57:09,883 DEBUG [IPC Server handler 12 on 36489] ipc.Server: IPC Server handler 12 on 36489: exiting 2016-08-20 21:57:09,886 DEBUG [main] service.CompositeService: Stopping service #6: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService in state org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService: STARTED 2016-08-20 21:57:09,887 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService entered state STOPPED 2016-08-20 21:57:09,887 DEBUG [main] service.CompositeService: Stopping service #5: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler in state org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: STARTED 2016-08-20 21:57:09,888 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler entered state STOPPED 2016-08-20 21:57:09,890 DEBUG [main] service.CompositeService: Stopping service #4: Service Dispatcher in state Dispatcher: STARTED 2016-08-20 21:57:09,893 DEBUG [main] service.AbstractService: Service: Dispatcher entered state STOPPED 2016-08-20 21:57:09,893 DEBUG [main] service.CompositeService: Stopping service #3: Service containers-monitor in state containers-monitor: STARTED 2016-08-20 21:57:09,894 DEBUG [main] service.AbstractService: Service: containers-monitor entered state STOPPED 2016-08-20 21:57:09,894 DEBUG [main] service.CompositeService: Stopping service #2: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices in state org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: STARTED 2016-08-20 21:57:09,894 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices entered state STOPPED 2016-08-20 21:57:09,894 DEBUG [main] service.CompositeService: Stopping service #1: Service containers-launcher in state containers-launcher: STARTED 2016-08-20 21:57:09,894 DEBUG [main] service.AbstractService: Service: containers-launcher entered state STOPPED 2016-08-20 21:57:09,894 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService in state org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: STARTED 2016-08-20 21:57:09,896 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService entered state STOPPED 
2016-08-20 21:57:09,896 INFO [main] ipc.Server: Stopping server on 35029 2016-08-20 21:57:09,897 DEBUG [IPC Server handler 0 on 35029] ipc.Server: IPC Server handler 0 on 35029: exiting 2016-08-20 21:57:09,898 DEBUG [IPC Server handler 2 on 35029] ipc.Server: IPC Server handler 2 on 35029: exiting 2016-08-20 21:57:09,898 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: stopping services, size=1 2016-08-20 21:57:09,898 DEBUG [IPC Server Responder] ipc.Server: Checking for old call responses. 2016-08-20 21:57:09,898 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker in state org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker: STARTED 2016-08-20 21:57:09,898 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder 2016-08-20 21:57:09,899 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker entered state STOPPED 2016-08-20 21:57:09,899 DEBUG [IPC Server handler 1 on 35029] ipc.Server: IPC Server handler 1 on 35029: exiting 2016-08-20 21:57:09,899 DEBUG [main] service.CompositeService: Stopping service #2: Service org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl in state org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl: STARTED 2016-08-20 21:57:09,899 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl entered state STOPPED 2016-08-20 21:57:09,899 INFO [IPC Server listener on 35029] ipc.Server: Stopping IPC Server listener on 35029 2016-08-20 21:57:09,899 DEBUG [IPC Server handler 4 on 35029] ipc.Server: IPC Server handler 4 on 35029: exiting 2016-08-20 21:57:09,900 WARN [Node Resource Monitor] nodemanager.NodeResourceMonitorImpl: org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl is interrupted. Exiting. 
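All of this teardown is driven by a single MiniYARNCluster.stop() in the test's cleanup: the cluster is itself a CompositeService whose children are the NodeManagerWrapper_0 and ResourceManagerWrapper_0 services named in this log, so the NM wrapper (service #1) stops before the RM wrapper (service #0). The roughly five-second gap before the "Stopping RM while some app masters are still alive" warning below appears to be the wrapper waiting briefly for application masters to finish before forcing the RM down. A minimal usage sketch, assuming the hadoop-yarn-server-tests artifact; the test name is arbitrary:

    import org.apache.hadoop.yarn.conf.YarnConfiguration;
    import org.apache.hadoop.yarn.server.MiniYARNCluster;

    public class MiniClusterSketch {
      public static void main(String[] args) {
        // One RM and one NM, with one local dir and one log dir per NM,
        // mirroring the ResourceManagerWrapper_0 / NodeManagerWrapper_0 pair.
        MiniYARNCluster cluster = new MiniYARNCluster("sketch-cluster", 1, 1, 1);
        cluster.init(new YarnConfiguration());
        cluster.start();
        try {
          // ... drive the test through cluster.getConfig() here ...
        } finally {
          cluster.stop();  // produces a shutdown sequence like the one in this log
        }
      }
    }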
2016-08-20 21:57:09,899 INFO [Public Localizer] localizer.ResourceLocalizationService: Public cache exiting 2016-08-20 21:57:09,899 DEBUG [IPC Server handler 3 on 35029] ipc.Server: IPC Server handler 3 on 35029: exiting 2016-08-20 21:57:09,901 DEBUG [main] service.CompositeService: Stopping service #1: Service org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService in state org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService: STARTED 2016-08-20 21:57:09,902 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService entered state STOPPED 2016-08-20 21:57:09,902 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService: stopping services, size=1 2016-08-20 21:57:09,902 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService in state org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService: STARTED 2016-08-20 21:57:09,902 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService entered state STOPPED 2016-08-20 21:57:09,902 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.nodemanager.DeletionService in state org.apache.hadoop.yarn.server.nodemanager.DeletionService: STARTED 2016-08-20 21:57:09,904 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.DeletionService entered state STOPPED 2016-08-20 21:57:09,905 DEBUG [main] impl.MetricsSystemImpl: refCount=2 2016-08-20 21:57:09,905 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.nodemanager.recovery.NMNullStateStoreService entered state STOPPED 2016-08-20 21:57:09,905 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper_0 in state org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper_0: STARTED 2016-08-20 21:57:09,905 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper_0 entered state STOPPED 2016-08-20 21:57:12,564 INFO [Timer-2] security.AMRMTokenSecretManager: Rolling master-key for amrm-tokens 2016-08-20 21:57:12,564 DEBUG [Timer-2] recovery.RMStateStore: Processing event of type UPDATE_AMRM_TOKEN 2016-08-20 21:57:12,564 INFO [Timer-2] recovery.RMStateStore: Updating AMRMToken 2016-08-20 21:57:13,046 DEBUG [IPC Server idle connection scanner for port 36239] ipc.Server: IPC Server idle connection scanner for port 36239: task running 2016-08-20 21:57:14,906 WARN [main] server.MiniYARNCluster: Stopping RM while some app masters are still alive 2016-08-20 21:57:14,906 DEBUG [main] service.AbstractService: Service: ResourceManager entered state STOPPED 2016-08-20 21:57:14,910 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.nio.SelectChannelConnector$1@51a651c1 2016-08-20 21:57:14,910 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.webapp.WebAppContext@173f1614{/,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/cluster} 2016-08-20 21:57:14,910 DEBUG [main] mortbay.log: stopping SessionHandler@6c184d4d 2016-08-20 21:57:14,911 DEBUG [main] mortbay.log: stopping SecurityHandler@7645f03e 2016-08-20 21:57:14,911 DEBUG [main] mortbay.log: stopping ServletHandler@158e9f6e 2016-08-20 21:57:14,911 DEBUG [main] mortbay.log: 
stopped guice 2016-08-20 21:57:14,911 DEBUG [main] mortbay.log: stopped org.apache.hadoop.security.http.XFrameOptionsFilter 2016-08-20 21:57:14,911 DEBUG [main] mortbay.log: stopped static_user_filter 2016-08-20 21:57:14,911 DEBUG [main] delegation.AbstractDelegationTokenSecretManager: Stopping expired delegation token remover thread 2016-08-20 21:57:14,911 ERROR [Thread[Thread-205,5,main]] delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted 2016-08-20 21:57:14,912 DEBUG [main] mortbay.log: stopped RMAuthenticationFilter 2016-08-20 21:57:14,912 DEBUG [main] mortbay.log: stopped safety 2016-08-20 21:57:14,912 DEBUG [main] mortbay.log: stopped NoCacheFilter 2016-08-20 21:57:14,912 DEBUG [main] mortbay.log: stopped NoCacheFilter 2016-08-20 21:57:14,912 DEBUG [main] mortbay.log: stopped proxy 2016-08-20 21:57:14,912 DEBUG [main] mortbay.log: stopped conf 2016-08-20 21:57:14,912 DEBUG [main] mortbay.log: stopped jmx 2016-08-20 21:57:14,912 DEBUG [main] mortbay.log: stopped logLevel 2016-08-20 21:57:14,913 DEBUG [main] mortbay.log: stopped stacks 2016-08-20 21:57:14,913 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@7a6ea47d 2016-08-20 21:57:14,913 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet-166694583 2016-08-20 21:57:14,913 DEBUG [main] mortbay.log: stopped ServletHandler@158e9f6e 2016-08-20 21:57:14,913 DEBUG [main] mortbay.log: stopped SecurityHandler@7645f03e 2016-08-20 21:57:14,913 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@54b2fc58 2016-08-20 21:57:14,914 DEBUG [main] mortbay.log: stopped SessionHandler@6c184d4d 2016-08-20 21:57:14,914 DEBUG [main] mortbay.log: stopping ErrorPageErrorHandler@daf22f0 2016-08-20 21:57:14,914 DEBUG [main] mortbay.log: stopped ErrorPageErrorHandler@daf22f0 2016-08-20 21:57:14,914 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - guice as filter 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - org.apache.hadoop.security.http.XFrameOptionsFilter as filter 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - static_user_filter as filter 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - RMAuthenticationFilter as filter 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - safety as filter 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - NoCacheFilter as filter 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - NoCacheFilter as filter 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=guice,[/*],[],15) as filterMapping 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=org.apache.hadoop.security.http.XFrameOptionsFilter,[/*],[],15) as filterMapping 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=static_user_filter,[/proxy/*],[],15) as filterMapping 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=RMAuthenticationFilter,[/proxy/*],[],15) as filterMapping 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=static_user_filter,[/ws/*],[],15) as filterMapping 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - 
(F=RMAuthenticationFilter,[/ws/*],[],15) as filterMapping 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=static_user_filter,[/cluster/*],[],15) as filterMapping 2016-08-20 21:57:14,915 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=RMAuthenticationFilter,[/cluster/*],[],15) as filterMapping 2016-08-20 21:57:14,916 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=static_user_filter,[/conf],[],15) as filterMapping 2016-08-20 21:57:14,916 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=RMAuthenticationFilter,[/conf],[],15) as filterMapping 2016-08-20 21:57:14,916 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=static_user_filter,[/jmx],[],15) as filterMapping 2016-08-20 21:57:14,916 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=RMAuthenticationFilter,[/jmx],[],15) as filterMapping 2016-08-20 21:57:14,916 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=static_user_filter,[/logLevel],[],15) as filterMapping 2016-08-20 21:57:14,916 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=RMAuthenticationFilter,[/logLevel],[],15) as filterMapping 2016-08-20 21:57:14,916 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=static_user_filter,[/stacks],[],15) as filterMapping 2016-08-20 21:57:14,916 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=RMAuthenticationFilter,[/stacks],[],15) as filterMapping 2016-08-20 21:57:14,916 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=static_user_filter,[*.html, *.jsp],[],15) as filterMapping 2016-08-20 21:57:14,916 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=RMAuthenticationFilter,[*.html, *.jsp],[],15) as filterMapping 2016-08-20 21:57:14,917 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=safety,[/*],[],15) as filterMapping 2016-08-20 21:57:14,917 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=NoCacheFilter,[/*],[],15) as filterMapping 2016-08-20 21:57:14,917 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (F=NoCacheFilter,[/*],[],15) as filterMapping 2016-08-20 21:57:14,917 DEBUG [main] mortbay.log: filterNameMap=null 2016-08-20 21:57:14,917 DEBUG [main] mortbay.log: pathFilters=null 2016-08-20 21:57:14,917 DEBUG [main] mortbay.log: servletFilterMap=null 2016-08-20 21:57:14,917 DEBUG [main] mortbay.log: servletPathMap={/jmx=jmx, /proxy/*=proxy, /conf=conf, /stacks=stacks, /logLevel=logLevel, /=org.mortbay.jetty.servlet.DefaultServlet-166694583} 2016-08-20 21:57:14,919 DEBUG [main] mortbay.log: servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-166694583=org.mortbay.jetty.servlet.DefaultServlet-166694583, proxy=proxy, logLevel=logLevel, jmx=jmx, stacks=stacks, conf=conf} 2016-08-20 21:57:14,919 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - proxy as servlet 2016-08-20 21:57:14,919 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - conf as servlet 2016-08-20 21:57:14,919 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - jmx as servlet 2016-08-20 21:57:14,920 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - logLevel as servlet 2016-08-20 21:57:14,920 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - stacks as servlet 2016-08-20 21:57:14,920 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - org.mortbay.jetty.servlet.DefaultServlet-166694583 as servlet 2016-08-20 21:57:14,920 DEBUG [main] mortbay.log: Container 
ServletHandler@158e9f6e - (S=proxy,[/proxy/*]) as servletMapping 2016-08-20 21:57:14,920 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (S=conf,[/conf]) as servletMapping 2016-08-20 21:57:14,920 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (S=jmx,[/jmx]) as servletMapping 2016-08-20 21:57:14,920 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (S=logLevel,[/logLevel]) as servletMapping 2016-08-20 21:57:14,920 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (S=stacks,[/stacks]) as servletMapping 2016-08-20 21:57:14,921 DEBUG [main] mortbay.log: Container ServletHandler@158e9f6e - (S=org.mortbay.jetty.servlet.DefaultServlet-166694583,[/]) as servletMapping 2016-08-20 21:57:14,921 DEBUG [main] mortbay.log: filterNameMap=null 2016-08-20 21:57:14,921 DEBUG [main] mortbay.log: pathFilters=null 2016-08-20 21:57:14,921 DEBUG [main] mortbay.log: servletFilterMap=null 2016-08-20 21:57:14,921 DEBUG [main] mortbay.log: servletPathMap=null 2016-08-20 21:57:14,921 DEBUG [main] mortbay.log: servletNameMap=null 2016-08-20 21:57:14,922 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.webapp.WebAppContext@173f1614{/,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/cluster} 2016-08-20 21:57:14,923 INFO [main] mortbay.log: Stopped SelectChannelConnector@localhost:0 2016-08-20 21:57:14,923 DEBUG [main] mortbay.log: stopped SelectChannelConnector@localhost:0 2016-08-20 21:57:14,923 DEBUG [main] mortbay.log: stopping Server@2b4d4327 2016-08-20 21:57:14,923 DEBUG [main] mortbay.log: stopping ContextHandlerCollection@16da1abc 2016-08-20 21:57:14,923 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.servlet.Context@23fb172e{/static,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/static} 2016-08-20 21:57:14,923 DEBUG [main] mortbay.log: stopping SessionHandler@671ea6ff 2016-08-20 21:57:14,923 DEBUG [main] mortbay.log: stopping ServletHandler@1c52552f 2016-08-20 21:57:14,923 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.DefaultServlet-697508322 2016-08-20 21:57:14,923 DEBUG [main] mortbay.log: stopped ServletHandler@1c52552f 2016-08-20 21:57:14,924 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@5dc769f9 2016-08-20 21:57:14,924 DEBUG [main] mortbay.log: stopped SessionHandler@671ea6ff 2016-08-20 21:57:14,924 DEBUG [main] mortbay.log: stopping ErrorHandler@1b0e9707 2016-08-20 21:57:14,927 DEBUG [main] mortbay.log: stopped ErrorHandler@1b0e9707 2016-08-20 21:57:14,927 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.Context@23fb172e{/static,jar:file:/opt/repo/org/apache/hadoop/hadoop-yarn-common/3.0.0-alpha2-SNAPSHOT/hadoop-yarn-common-3.0.0-alpha2-SNAPSHOT.jar!/webapps/static} 2016-08-20 21:57:14,927 DEBUG [main] mortbay.log: stopping org.mortbay.jetty.servlet.Context@732f29af{/logs,file:/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/log} 2016-08-20 21:57:14,927 DEBUG [main] mortbay.log: stopping SessionHandler@9b5f3c7 2016-08-20 21:57:14,927 DEBUG [main] mortbay.log: stopping ServletHandler@74024f3 2016-08-20 21:57:14,927 DEBUG [main] mortbay.log: stopped org.apache.hadoop.http.AdminAuthorizedServlet-1166106620 2016-08-20 21:57:14,928 DEBUG [main] mortbay.log: stopped ServletHandler@74024f3 2016-08-20 21:57:14,928 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionManager@61ae0d43 
2016-08-20 21:57:14,928 DEBUG [main] mortbay.log: stopped SessionHandler@9b5f3c7 2016-08-20 21:57:14,928 DEBUG [main] mortbay.log: stopping ErrorHandler@ef718de 2016-08-20 21:57:14,928 DEBUG [main] mortbay.log: stopped ErrorHandler@ef718de 2016-08-20 21:57:14,928 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.Context@732f29af{/logs,file:/opt/hadooptrunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/log} 2016-08-20 21:57:14,928 DEBUG [main] mortbay.log: stopped ContextHandlerCollection@16da1abc 2016-08-20 21:57:14,928 DEBUG [main] mortbay.log: stopped org.mortbay.jetty.servlet.HashSessionIdManager@287ae90c 2016-08-20 21:57:15,029 DEBUG [main] mortbay.log: stopped org.mortbay.thread.QueuedThreadPool@31000e60 2016-08-20 21:57:15,029 DEBUG [main] mortbay.log: stopped Server@2b4d4327 2016-08-20 21:57:15,029 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.ClientRMService entered state STOPPED 2016-08-20 21:57:15,029 INFO [main] ipc.Server: Stopping server on 44193 2016-08-20 21:57:15,029 DEBUG [IPC Server handler 0 on 44193] ipc.Server: IPC Server handler 0 on 44193: exiting 2016-08-20 21:57:15,029 DEBUG [IPC Server handler 2 on 44193] ipc.Server: IPC Server handler 2 on 44193: exiting 2016-08-20 21:57:15,030 DEBUG [IPC Server handler 3 on 44193] ipc.Server: IPC Server handler 3 on 44193: exiting 2016-08-20 21:57:15,029 DEBUG [IPC Server handler 1 on 44193] ipc.Server: IPC Server handler 1 on 44193: exiting 2016-08-20 21:57:15,030 DEBUG [IPC Server handler 4 on 44193] ipc.Server: IPC Server handler 4 on 44193: exiting 2016-08-20 21:57:15,030 DEBUG [IPC Server handler 5 on 44193] ipc.Server: IPC Server handler 5 on 44193: exiting 2016-08-20 21:57:15,030 DEBUG [IPC Server handler 6 on 44193] ipc.Server: IPC Server handler 6 on 44193: exiting 2016-08-20 21:57:15,030 DEBUG [IPC Server handler 7 on 44193] ipc.Server: IPC Server handler 7 on 44193: exiting 2016-08-20 21:57:15,031 DEBUG [IPC Server handler 9 on 44193] ipc.Server: IPC Server handler 9 on 44193: exiting 2016-08-20 21:57:15,031 DEBUG [IPC Server handler 13 on 44193] ipc.Server: IPC Server handler 13 on 44193: exiting 2016-08-20 21:57:15,031 DEBUG [IPC Server handler 12 on 44193] ipc.Server: IPC Server handler 12 on 44193: exiting 2016-08-20 21:57:15,031 DEBUG [IPC Server handler 11 on 44193] ipc.Server: IPC Server handler 11 on 44193: exiting 2016-08-20 21:57:15,033 DEBUG [IPC Server handler 18 on 44193] ipc.Server: IPC Server handler 18 on 44193: exiting 2016-08-20 21:57:15,033 DEBUG [IPC Server handler 27 on 44193] ipc.Server: IPC Server handler 27 on 44193: exiting 2016-08-20 21:57:15,031 DEBUG [IPC Server handler 10 on 44193] ipc.Server: IPC Server handler 10 on 44193: exiting 2016-08-20 21:57:15,034 DEBUG [IPC Server handler 32 on 44193] ipc.Server: IPC Server handler 32 on 44193: exiting 2016-08-20 21:57:15,034 DEBUG [IPC Server handler 31 on 44193] ipc.Server: IPC Server handler 31 on 44193: exiting 2016-08-20 21:57:15,034 DEBUG [IPC Server handler 37 on 44193] ipc.Server: IPC Server handler 37 on 44193: exiting 2016-08-20 21:57:15,035 DEBUG [IPC Server handler 39 on 44193] ipc.Server: IPC Server handler 39 on 44193: exiting 2016-08-20 21:57:15,034 DEBUG [IPC Server handler 30 on 44193] ipc.Server: IPC Server handler 30 on 44193: exiting 2016-08-20 21:57:15,034 DEBUG [IPC Server handler 29 on 44193] ipc.Server: IPC Server handler 29 on 44193: exiting 2016-08-20 21:57:15,035 DEBUG [IPC Server handler 41 on 44193] ipc.Server: IPC Server handler 41 on 44193: exiting 
2016-08-20 21:57:15,035 DEBUG [IPC Server handler 44 on 44193] ipc.Server: IPC Server handler 44 on 44193: exiting 2016-08-20 21:57:15,033 DEBUG [IPC Server handler 28 on 44193] ipc.Server: IPC Server handler 28 on 44193: exiting 2016-08-20 21:57:15,035 DEBUG [IPC Server Responder] ipc.Server: Checking for old call responses. 2016-08-20 21:57:15,033 DEBUG [IPC Server handler 26 on 44193] ipc.Server: IPC Server handler 26 on 44193: exiting 2016-08-20 21:57:15,033 DEBUG [IPC Server handler 25 on 44193] ipc.Server: IPC Server handler 25 on 44193: exiting 2016-08-20 21:57:15,036 DEBUG [main] service.CompositeService: ResourceManager: stopping services, size=3 2016-08-20 21:57:15,033 DEBUG [IPC Server handler 23 on 44193] ipc.Server: IPC Server handler 23 on 44193: exiting 2016-08-20 21:57:15,033 DEBUG [IPC Server handler 24 on 44193] ipc.Server: IPC Server handler 24 on 44193: exiting 2016-08-20 21:57:15,033 DEBUG [IPC Server handler 22 on 44193] ipc.Server: IPC Server handler 22 on 44193: exiting 2016-08-20 21:57:15,033 DEBUG [IPC Server handler 20 on 44193] ipc.Server: IPC Server handler 20 on 44193: exiting 2016-08-20 21:57:15,037 DEBUG [main] service.CompositeService: Stopping service #2: Service org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter in state org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: STARTED 2016-08-20 21:57:15,033 DEBUG [IPC Server handler 21 on 44193] ipc.Server: IPC Server handler 21 on 44193: exiting 2016-08-20 21:57:15,032 DEBUG [IPC Server handler 16 on 44193] ipc.Server: IPC Server handler 16 on 44193: exiting 2016-08-20 21:57:15,032 DEBUG [IPC Server handler 15 on 44193] ipc.Server: IPC Server handler 15 on 44193: exiting 2016-08-20 21:57:15,032 DEBUG [IPC Server handler 19 on 44193] ipc.Server: IPC Server handler 19 on 44193: exiting 2016-08-20 21:57:15,031 DEBUG [IPC Server handler 17 on 44193] ipc.Server: IPC Server handler 17 on 44193: exiting 2016-08-20 21:57:15,031 DEBUG [IPC Server handler 8 on 44193] ipc.Server: IPC Server handler 8 on 44193: exiting 2016-08-20 21:57:15,031 DEBUG [IPC Server handler 14 on 44193] ipc.Server: IPC Server handler 14 on 44193: exiting 2016-08-20 21:57:15,037 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter entered state STOPPED 2016-08-20 21:57:15,040 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: stopping services, size=0 2016-08-20 21:57:15,036 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder 2016-08-20 21:57:15,035 INFO [IPC Server listener on 44193] ipc.Server: Stopping IPC Server listener on 44193 2016-08-20 21:57:15,035 DEBUG [IPC Server handler 48 on 44193] ipc.Server: IPC Server handler 48 on 44193: exiting 2016-08-20 21:57:15,035 DEBUG [IPC Server handler 49 on 44193] ipc.Server: IPC Server handler 49 on 44193: exiting 2016-08-20 21:57:15,035 DEBUG [IPC Server handler 47 on 44193] ipc.Server: IPC Server handler 47 on 44193: exiting 2016-08-20 21:57:15,035 DEBUG [IPC Server handler 46 on 44193] ipc.Server: IPC Server handler 46 on 44193: exiting 2016-08-20 21:57:15,035 DEBUG [IPC Server handler 45 on 44193] ipc.Server: IPC Server handler 45 on 44193: exiting 2016-08-20 21:57:15,035 DEBUG [IPC Server handler 38 on 44193] ipc.Server: IPC Server handler 38 on 44193: exiting 2016-08-20 21:57:15,035 DEBUG [IPC Server handler 43 on 44193] ipc.Server: IPC Server handler 43 on 44193: exiting 2016-08-20 
21:57:15,035 DEBUG [IPC Server handler 42 on 44193] ipc.Server: IPC Server handler 42 on 44193: exiting 2016-08-20 21:57:15,035 DEBUG [IPC Server handler 40 on 44193] ipc.Server: IPC Server handler 40 on 44193: exiting 2016-08-20 21:57:15,034 DEBUG [IPC Server handler 36 on 44193] ipc.Server: IPC Server handler 36 on 44193: exiting 2016-08-20 21:57:15,034 DEBUG [IPC Server handler 34 on 44193] ipc.Server: IPC Server handler 34 on 44193: exiting 2016-08-20 21:57:15,034 DEBUG [IPC Server handler 35 on 44193] ipc.Server: IPC Server handler 35 on 44193: exiting 2016-08-20 21:57:15,034 DEBUG [IPC Server handler 33 on 44193] ipc.Server: IPC Server handler 33 on 44193: exiting 2016-08-20 21:57:15,041 DEBUG [IPC Server listener on 44193] ipc.Server: IPC Server listener on 44193: disconnecting client 127.0.0.1:58714. Number of active connections: 0 2016-08-20 21:57:15,041 DEBUG [main] service.CompositeService: Stopping service #1: Service org.apache.hadoop.yarn.server.resourcemanager.AdminService in state org.apache.hadoop.yarn.server.resourcemanager.AdminService: STARTED 2016-08-20 21:57:15,043 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.AdminService entered state STOPPED 2016-08-20 21:57:15,043 INFO [main] ipc.Server: Stopping server on 36239 2016-08-20 21:57:15,044 DEBUG [IPC Server handler 0 on 36239] ipc.Server: IPC Server handler 0 on 36239: exiting 2016-08-20 21:57:15,046 INFO [IPC Server listener on 36239] ipc.Server: Stopping IPC Server listener on 36239 2016-08-20 21:57:15,047 DEBUG [IPC Server Responder] ipc.Server: Checking for old call responses. 2016-08-20 21:57:15,047 DEBUG [main] service.CompositeService: org.apache.hadoop.yarn.server.resourcemanager.AdminService: stopping services, size=0 2016-08-20 21:57:15,047 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder 2016-08-20 21:57:15,047 DEBUG [main] service.CompositeService: Stopping service #0: Service Dispatcher in state Dispatcher: STARTED 2016-08-20 21:57:15,047 DEBUG [main] service.AbstractService: Service: Dispatcher entered state STOPPED 2016-08-20 21:57:15,047 INFO [main] resourcemanager.ResourceManager: Transitioning to standby state 2016-08-20 21:57:15,047 DEBUG [main] service.AbstractService: Service: RMActiveServices entered state STOPPED 2016-08-20 21:57:15,047 DEBUG [main] service.CompositeService: RMActiveServices: stopping services, size=14 2016-08-20 21:57:15,047 DEBUG [main] service.CompositeService: Stopping service #13: Service org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher in state org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher: STARTED 2016-08-20 21:57:15,047 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher entered state STOPPED 2016-08-20 21:57:15,048 WARN [ApplicationMaster Launcher] amlauncher.ApplicationMasterLauncher: org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread interrupted. Returning. 
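The fifty "IPC Server handler N on 44193" threads that just exited above, and the fifty more about to exit on the ApplicationMasterService port below, are sized by RM configuration knobs that each default to 50. For a single-node test cluster these pools can be shrunk to cut thread noise in logs like this one. A small sketch, assuming the current YarnConfiguration constant names:

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class HandlerCountSketch {
      public static void main(String[] args) {
        YarnConfiguration conf = new YarnConfiguration();
        // ClientRMService pool (the port-44193 handlers above).
        conf.setInt(YarnConfiguration.RM_CLIENT_THREAD_COUNT, 10);
        // ApplicationMasterService pool (the port-37347 handlers below).
        conf.setInt(YarnConfiguration.RM_SCHEDULER_CLIENT_THREAD_COUNT, 10);
        // ResourceTrackerService pool (the port-45325 handlers further below).
        conf.setInt(YarnConfiguration.RM_RESOURCE_TRACKER_CLIENT_THREAD_COUNT, 10);
        // The default of 50 is why handlers 0 through 49 appear per server.
        System.out.println(conf.getInt(YarnConfiguration.RM_CLIENT_THREAD_COUNT, 50));
      }
    }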
2016-08-20 21:57:15,048 DEBUG [main] service.CompositeService: Stopping service #12: Service org.apache.hadoop.yarn.server.resourcemanager.ClientRMService in state org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: STOPPED 2016-08-20 21:57:15,048 DEBUG [main] service.CompositeService: Stopping service #11: Service org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService in state org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: STARTED 2016-08-20 21:57:15,048 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService entered state STOPPED 2016-08-20 21:57:15,048 INFO [main] ipc.Server: Stopping server on 37347 2016-08-20 21:57:15,049 DEBUG [IPC Server handler 0 on 37347] ipc.Server: IPC Server handler 0 on 37347: exiting 2016-08-20 21:57:15,049 DEBUG [IPC Server handler 5 on 37347] ipc.Server: IPC Server handler 5 on 37347: exiting 2016-08-20 21:57:15,049 DEBUG [IPC Server handler 8 on 37347] ipc.Server: IPC Server handler 8 on 37347: exiting 2016-08-20 21:57:15,049 DEBUG [IPC Server handler 9 on 37347] ipc.Server: IPC Server handler 9 on 37347: exiting 2016-08-20 21:57:15,050 DEBUG [IPC Server handler 4 on 37347] ipc.Server: IPC Server handler 4 on 37347: exiting 2016-08-20 21:57:15,050 INFO [IPC Server listener on 37347] ipc.Server: Stopping IPC Server listener on 37347 2016-08-20 21:57:15,049 DEBUG [IPC Server handler 3 on 37347] ipc.Server: IPC Server handler 3 on 37347: exiting 2016-08-20 21:57:15,050 DEBUG [IPC Server handler 24 on 37347] ipc.Server: IPC Server handler 24 on 37347: exiting 2016-08-20 21:57:15,050 DEBUG [IPC Server listener on 37347] ipc.Server: IPC Server listener on 37347: disconnecting client 127.0.0.1:46672. Number of active connections: 0 2016-08-20 21:57:15,050 DEBUG [IPC Server handler 22 on 37347] ipc.Server: IPC Server handler 22 on 37347: exiting 2016-08-20 21:57:15,050 DEBUG [IPC Server handler 26 on 37347] ipc.Server: IPC Server handler 26 on 37347: exiting 2016-08-20 21:57:15,051 DEBUG [IPC Server handler 27 on 37347] ipc.Server: IPC Server handler 27 on 37347: exiting 2016-08-20 21:57:15,051 DEBUG [IPC Server handler 38 on 37347] ipc.Server: IPC Server handler 38 on 37347: exiting 2016-08-20 21:57:15,051 DEBUG [IPC Server handler 39 on 37347] ipc.Server: IPC Server handler 39 on 37347: exiting 2016-08-20 21:57:15,051 DEBUG [IPC Server handler 41 on 37347] ipc.Server: IPC Server handler 41 on 37347: exiting 2016-08-20 21:57:15,050 DEBUG [IPC Server handler 21 on 37347] ipc.Server: IPC Server handler 21 on 37347: exiting 2016-08-20 21:57:15,050 DEBUG [IPC Server handler 25 on 37347] ipc.Server: IPC Server handler 25 on 37347: exiting 2016-08-20 21:57:15,050 DEBUG [IPC Server handler 16 on 37347] ipc.Server: IPC Server handler 16 on 37347: exiting 2016-08-20 21:57:15,050 DEBUG [IPC Server handler 7 on 37347] ipc.Server: IPC Server handler 7 on 37347: exiting 2016-08-20 21:57:15,050 DEBUG [IPC Server handler 2 on 37347] ipc.Server: IPC Server handler 2 on 37347: exiting 2016-08-20 21:57:15,050 DEBUG [IPC Server handler 20 on 37347] ipc.Server: IPC Server handler 20 on 37347: exiting 2016-08-20 21:57:15,049 DEBUG [IPC Server handler 17 on 37347] ipc.Server: IPC Server handler 17 on 37347: exiting 2016-08-20 21:57:15,049 DEBUG [IPC Server handler 15 on 37347] ipc.Server: IPC Server handler 15 on 37347: exiting 2016-08-20 21:57:15,049 DEBUG [IPC Server handler 13 on 37347] ipc.Server: IPC Server handler 13 on 37347: exiting 2016-08-20 21:57:15,049 DEBUG 
[IPC Server handler 12 on 37347] ipc.Server: IPC Server handler 12 on 37347: exiting 2016-08-20 21:57:15,056 DEBUG [IPC Server handler 1 on 37347] ipc.Server: IPC Server handler 1 on 37347: exiting 2016-08-20 21:57:15,056 DEBUG [IPC Server handler 23 on 37347] ipc.Server: IPC Server handler 23 on 37347: exiting 2016-08-20 21:57:15,056 DEBUG [IPC Server handler 28 on 37347] ipc.Server: IPC Server handler 28 on 37347: exiting 2016-08-20 21:57:15,056 DEBUG [IPC Server handler 30 on 37347] ipc.Server: IPC Server handler 30 on 37347: exiting 2016-08-20 21:57:15,057 DEBUG [IPC Server handler 35 on 37347] ipc.Server: IPC Server handler 35 on 37347: exiting 2016-08-20 21:57:15,049 DEBUG [IPC Server handler 14 on 37347] ipc.Server: IPC Server handler 14 on 37347: exiting 2016-08-20 21:57:15,049 DEBUG [IPC Server handler 11 on 37347] ipc.Server: IPC Server handler 11 on 37347: exiting 2016-08-20 21:57:15,056 DEBUG [IPC Server handler 19 on 37347] ipc.Server: IPC Server handler 19 on 37347: exiting 2016-08-20 21:57:15,056 DEBUG [IPC Server handler 18 on 37347] ipc.Server: IPC Server handler 18 on 37347: exiting 2016-08-20 21:57:15,056 DEBUG [IPC Server handler 10 on 37347] ipc.Server: IPC Server handler 10 on 37347: exiting 2016-08-20 21:57:15,056 DEBUG [IPC Server handler 6 on 37347] ipc.Server: IPC Server handler 6 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 33 on 37347] ipc.Server: IPC Server handler 33 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [IPC Server Responder] ipc.Server: Checking for old call responses. 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 34 on 37347] ipc.Server: IPC Server handler 34 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 37 on 37347] ipc.Server: IPC Server handler 37 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 36 on 37347] ipc.Server: IPC Server handler 36 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 48 on 37347] ipc.Server: IPC Server handler 48 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 47 on 37347] ipc.Server: IPC Server handler 47 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 45 on 37347] ipc.Server: IPC Server handler 45 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 31 on 37347] ipc.Server: IPC Server handler 31 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 44 on 37347] ipc.Server: IPC Server handler 44 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 49 on 37347] ipc.Server: IPC Server handler 49 on 37347: exiting 2016-08-20 21:57:15,052 DEBUG [main] service.CompositeService: Stopping service #10: Service org.apache.hadoop.util.JvmPauseMonitor in state org.apache.hadoop.util.JvmPauseMonitor: STARTED 2016-08-20 21:57:15,052 DEBUG [IPC Server handler 43 on 37347] ipc.Server: IPC Server handler 43 on 37347: exiting 2016-08-20 21:57:15,051 DEBUG [IPC Server handler 46 on 37347] ipc.Server: IPC Server handler 46 on 37347: exiting 2016-08-20 21:57:15,051 DEBUG [IPC Server handler 42 on 37347] ipc.Server: IPC Server handler 42 on 37347: exiting 2016-08-20 21:57:15,051 DEBUG [IPC Server handler 40 on 37347] ipc.Server: IPC Server handler 40 on 37347: exiting 2016-08-20 21:57:15,051 DEBUG [IPC Server handler 32 on 37347] ipc.Server: IPC Server handler 32 on 37347: exiting 2016-08-20 21:57:15,051 DEBUG [IPC Server handler 29 on 37347] ipc.Server: IPC Server handler 29 on 37347: exiting 2016-08-20 21:57:15,059 DEBUG [main] service.AbstractService: Service: 
2016-08-20 21:57:15,058 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder
2016-08-20 21:57:15,060 DEBUG [main] service.CompositeService: Stopping service #9: Service org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService in state org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: STARTED
2016-08-20 21:57:15,060 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService entered state STOPPED
2016-08-20 21:57:15,060 INFO [main] ipc.Server: Stopping server on 45325
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 0 on 45325] ipc.Server: IPC Server handler 0 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 2 on 45325] ipc.Server: IPC Server handler 2 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 3 on 45325] ipc.Server: IPC Server handler 3 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 6 on 45325] ipc.Server: IPC Server handler 6 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 9 on 45325] ipc.Server: IPC Server handler 9 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 21 on 45325] ipc.Server: IPC Server handler 21 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 1 on 45325] ipc.Server: IPC Server handler 1 on 45325: exiting
2016-08-20 21:57:15,062 DEBUG [IPC Server handler 8 on 45325] ipc.Server: IPC Server handler 8 on 45325: exiting
2016-08-20 21:57:15,064 DEBUG [IPC Server handler 48 on 45325] ipc.Server: IPC Server handler 48 on 45325: exiting
2016-08-20 21:57:15,062 DEBUG [IPC Server handler 7 on 45325] ipc.Server: IPC Server handler 7 on 45325: exiting
2016-08-20 21:57:15,062 DEBUG [IPC Server handler 14 on 45325] ipc.Server: IPC Server handler 14 on 45325: exiting
2016-08-20 21:57:15,062 DEBUG [IPC Server handler 5 on 45325] ipc.Server: IPC Server handler 5 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 24 on 45325] ipc.Server: IPC Server handler 24 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 4 on 45325] ipc.Server: IPC Server handler 4 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 22 on 45325] ipc.Server: IPC Server handler 22 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 20 on 45325] ipc.Server: IPC Server handler 20 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 18 on 45325] ipc.Server: IPC Server handler 18 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 16 on 45325] ipc.Server: IPC Server handler 16 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 15 on 45325] ipc.Server: IPC Server handler 15 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 12 on 45325] ipc.Server: IPC Server handler 12 on 45325: exiting
2016-08-20 21:57:15,061 DEBUG [IPC Server handler 11 on 45325] ipc.Server: IPC Server handler 11 on 45325: exiting
2016-08-20 21:57:15,064 DEBUG [IPC Server handler 47 on 45325] ipc.Server: IPC Server handler 47 on 45325: exiting
2016-08-20 21:57:15,064 DEBUG [IPC Server handler 49 on 45325] ipc.Server: IPC Server handler 49 on 45325: exiting
2016-08-20 21:57:15,064 DEBUG [IPC Server Responder] ipc.Server: Checking for old call responses.
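Each "IPC Server handler N on <port>: exiting" line is emitted once per handler thread as the server interrupts its thread pool during stop. A hedged Java sketch of that shutdown shape (illustrative names, not Hadoop's ipc.Server):

    // Illustrative sketch: stop() interrupts a pool of handler threads,
    // each of which logs an "exiting" line on its way out.
    class HandlerPool {
        private final Thread[] handlers;

        HandlerPool(int count, int port) {
            handlers = new Thread[count];
            for (int i = 0; i < count; i++) {
                final int id = i;
                handlers[i] = new Thread(() -> {
                    try {
                        while (true) {
                            Thread.sleep(100); // stand-in for taking a call off the queue
                        }
                    } catch (InterruptedException e) {
                        // interruption is the stop signal
                    }
                    System.out.println("IPC Server handler " + id + " on " + port + ": exiting");
                }, "IPC Server handler " + i + " on " + port);
                handlers[i].start();
            }
        }

        void stop() {
            for (Thread t : handlers) {
                t.interrupt();
            }
        }
    }

The interleaved, slightly out-of-order timestamps above (handler 48 at ,064 logged before handler 7 at ,062) are consistent with this: the handler threads exit concurrently and race to write their final line.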
2016-08-20 21:57:15,064 DEBUG [main] service.CompositeService: Stopping service #8: Service NMLivelinessMonitor in state NMLivelinessMonitor: STARTED
2016-08-20 21:57:15,064 DEBUG [IPC Server handler 44 on 45325] ipc.Server: IPC Server handler 44 on 45325: exiting
2016-08-20 21:57:15,064 DEBUG [IPC Server handler 46 on 45325] ipc.Server: IPC Server handler 46 on 45325: exiting
2016-08-20 21:57:15,064 DEBUG [IPC Server handler 45 on 45325] ipc.Server: IPC Server handler 45 on 45325: exiting
2016-08-20 21:57:15,064 DEBUG [IPC Server handler 43 on 45325] ipc.Server: IPC Server handler 43 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 39 on 45325] ipc.Server: IPC Server handler 39 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 42 on 45325] ipc.Server: IPC Server handler 42 on 45325: exiting
2016-08-20 21:57:15,063 INFO [IPC Server listener on 45325] ipc.Server: Stopping IPC Server listener on 45325
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 35 on 45325] ipc.Server: IPC Server handler 35 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 41 on 45325] ipc.Server: IPC Server handler 41 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 40 on 45325] ipc.Server: IPC Server handler 40 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 38 on 45325] ipc.Server: IPC Server handler 38 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 37 on 45325] ipc.Server: IPC Server handler 37 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 36 on 45325] ipc.Server: IPC Server handler 36 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 34 on 45325] ipc.Server: IPC Server handler 34 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 26 on 45325] ipc.Server: IPC Server handler 26 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 33 on 45325] ipc.Server: IPC Server handler 33 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 23 on 45325] ipc.Server: IPC Server handler 23 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 32 on 45325] ipc.Server: IPC Server handler 32 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 31 on 45325] ipc.Server: IPC Server handler 31 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 30 on 45325] ipc.Server: IPC Server handler 30 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 29 on 45325] ipc.Server: IPC Server handler 29 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 28 on 45325] ipc.Server: IPC Server handler 28 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 27 on 45325] ipc.Server: IPC Server handler 27 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 19 on 45325] ipc.Server: IPC Server handler 19 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 17 on 45325] ipc.Server: IPC Server handler 17 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 13 on 45325] ipc.Server: IPC Server handler 13 on 45325: exiting
2016-08-20 21:57:15,063 DEBUG [IPC Server handler 25 on 45325] ipc.Server: IPC Server handler 25 on 45325: exiting
2016-08-20 21:57:15,062 DEBUG [IPC Server handler 10 on 45325] ipc.Server: IPC Server handler 10 on 45325: exiting
2016-08-20 21:57:15,066 DEBUG [main] service.AbstractService: Service: NMLivelinessMonitor entered state STOPPED
2016-08-20 21:57:15,066 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder
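The paired "Stopping service ..." / "Service: ... entered state STOPPED" lines come from the service state machine: each stop transitions the service to STOPPED and logs the new state. A tiny Java sketch in the spirit of AbstractService (illustrative only, not the real class):

    // Illustrative sketch of the transition behind
    // "Service: X entered state STOPPED".
    class TinyService {
        enum State { NOTINITED, INITED, STARTED, STOPPED }

        private final String name;
        private volatile State state = State.NOTINITED;

        TinyService(String name) {
            this.name = name;
        }

        void start() {
            enterState(State.STARTED);
        }

        void stop() {
            if (state == State.STOPPED) {
                return; // stop is idempotent; note service #12 above was already STOPPED
            }
            enterState(State.STOPPED);
        }

        private void enterState(State next) {
            state = next;
            System.out.println("Service: " + name + " entered state " + next);
        }
    }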
2016-08-20 21:57:15,070 DEBUG [main] service.CompositeService: Stopping service #7: Service SchedulerEventDispatcher in state SchedulerEventDispatcher: STARTED
2016-08-20 21:57:15,071 DEBUG [main] service.AbstractService: Service: SchedulerEventDispatcher entered state STOPPED
2016-08-20 21:57:15,071 INFO [Ping Checker] util.AbstractLivelinessMonitor: NMLivelinessMonitor thread interrupted
2016-08-20 21:57:15,071 ERROR [SchedulerEventDispatcher:Event Processor] event.EventDispatcher: Returning, interrupted : java.lang.InterruptedException
2016-08-20 21:57:15,072 DEBUG [main] service.CompositeService: Stopping service #6: Service org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler in state org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: STARTED
2016-08-20 21:57:15,072 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler entered state STOPPED
2016-08-20 21:57:15,072 DEBUG [main] service.CompositeService: Stopping service #5: Service org.apache.hadoop.yarn.server.resourcemanager.NodesListManager in state org.apache.hadoop.yarn.server.resourcemanager.NodesListManager: STARTED
2016-08-20 21:57:15,072 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.NodesListManager entered state STOPPED
2016-08-20 21:57:15,072 DEBUG [main] service.CompositeService: Stopping service #4: Service org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager in state org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager: STARTED
2016-08-20 21:57:15,072 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager entered state STOPPED
2016-08-20 21:57:15,072 DEBUG [main] service.AbstractService: Service: Dispatcher entered state STOPPED
2016-08-20 21:57:15,072 INFO [main] event.AsyncDispatcher: AsyncDispatcher is draining to stop, ignoring any new events.
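"AsyncDispatcher is draining to stop, ignoring any new events" describes a two-phase stop: mark the dispatcher as draining so new events are dropped, then finish the events already queued. A hedged Java sketch of that behaviour (illustrative, not Hadoop's AsyncDispatcher):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Illustrative sketch of drain-to-stop: new events are ignored once
    // draining begins, while already-queued events are still processed.
    class DrainingDispatcher {
        private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
        private volatile boolean draining = false;

        void dispatch(Runnable event) {
            if (draining) {
                return; // ignoring any new events
            }
            queue.add(event);
        }

        void stop() {
            draining = true;
            System.out.println("AsyncDispatcher is draining to stop, ignoring any new events.");
            Runnable event;
            while ((event = queue.poll()) != null) {
                event.run(); // drain what was queued before stop
            }
        }
    }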
2016-08-20 21:57:15,074 DEBUG [main] service.CompositeService: Stopping service #3: Service AMLivelinessMonitor in state AMLivelinessMonitor: STARTED
2016-08-20 21:57:15,074 DEBUG [main] service.AbstractService: Service: AMLivelinessMonitor entered state STOPPED
2016-08-20 21:57:15,074 INFO [Ping Checker] util.AbstractLivelinessMonitor: AMLivelinessMonitor thread interrupted
2016-08-20 21:57:15,074 DEBUG [main] service.CompositeService: Stopping service #2: Service AMLivelinessMonitor in state AMLivelinessMonitor: STARTED
2016-08-20 21:57:15,075 DEBUG [main] service.AbstractService: Service: AMLivelinessMonitor entered state STOPPED
2016-08-20 21:57:15,075 DEBUG [main] service.CompositeService: Stopping service #1: Service org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer in state org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer: STARTED
2016-08-20 21:57:15,075 INFO [Ping Checker] util.AbstractLivelinessMonitor: AMLivelinessMonitor thread interrupted
2016-08-20 21:57:15,075 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer entered state STOPPED
2016-08-20 21:57:15,076 DEBUG [main] service.CompositeService: Stopping service #0: Service org.apache.hadoop.yarn.server.resourcemanager.RMSecretManagerService in state org.apache.hadoop.yarn.server.resourcemanager.RMSecretManagerService: STARTED
2016-08-20 21:57:15,076 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.RMSecretManagerService entered state STOPPED
2016-08-20 21:57:15,076 DEBUG [main] delegation.AbstractDelegationTokenSecretManager: Stopping expired delegation token remover thread
2016-08-20 21:57:15,076 INFO [Ping Checker] util.AbstractLivelinessMonitor: org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer thread interrupted
2016-08-20 21:57:15,076 ERROR [Thread[Thread-33,5,main]] delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2016-08-20 21:57:15,077 DEBUG [main] impl.MetricsSystemImpl: refCount=1
2016-08-20 21:57:15,077 INFO [main] impl.MetricsSystemImpl: Stopping NodeManager metrics system...
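The "[Ping Checker] ... thread interrupted" and "ExpiredTokenRemover received java.lang.InterruptedException" lines show the same idiom: a periodic background thread that sleeps between checks and treats interruption as its stop signal. A minimal illustrative Java sketch (names chosen to echo the log, not the actual classes):

    // Illustrative sketch: a periodic checker thread stopped via interrupt(),
    // logging "<name> thread interrupted" as the monitors above do.
    class PingChecker implements Runnable {
        private final String monitorName;

        PingChecker(String monitorName) {
            this.monitorName = monitorName;
        }

        @Override
        public void run() {
            try {
                while (true) {
                    Thread.sleep(1000); // stand-in for one expiry-check round
                }
            } catch (InterruptedException e) {
                System.out.println(monitorName + " thread interrupted");
            }
        }
    }

    // Usage: the owning service keeps the Thread and interrupts it on stop.
    //   Thread checker = new Thread(new PingChecker("AMLivelinessMonitor"), "Ping Checker");
    //   checker.start();
    //   ...
    //   checker.interrupt(); // produces the "thread interrupted" line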
2016-08-20 21:57:15,078 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source JvmMetrics: class=class org.apache.hadoop.metrics2.source.JvmMetrics
2016-08-20 21:57:15,078 DEBUG [main] util.MBeans: Unregistering Hadoop:service=ResourceManager,name=JvmMetrics
2016-08-20 21:57:15,078 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source NodeManagerMetrics: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,078 DEBUG [main] util.MBeans: Unregistering Hadoop:service=ResourceManager,name=NodeManagerMetrics
2016-08-20 21:57:15,078 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source UgiMetrics: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,078 DEBUG [main] util.MBeans: Unregistering Hadoop:service=ResourceManager,name=UgiMetrics
2016-08-20 21:57:15,078 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source JvmMetrics-1: class=class org.apache.hadoop.metrics2.source.JvmMetrics
2016-08-20 21:57:15,078 DEBUG [main] util.MBeans: Unregistering Hadoop:service=ResourceManager,name=JvmMetrics-1
2016-08-20 21:57:15,080 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source QueueMetrics,q0=root: class=class org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueMetrics
2016-08-20 21:57:15,080 DEBUG [main] util.MBeans: Unregistering Hadoop:service=ResourceManager,name=QueueMetrics,q0=root
2016-08-20 21:57:15,080 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source QueueMetrics,q0=root,q1=default: class=class org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueMetrics
2016-08-20 21:57:15,080 DEBUG [main] util.MBeans: Unregistering Hadoop:service=ResourceManager,name=QueueMetrics,q0=root,q1=default
2016-08-20 21:57:15,080 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcActivityForPort45325: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,080 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcActivityForPort45325
2016-08-20 21:57:15,080 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcDetailedActivityForPort45325: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,080 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcDetailedActivityForPort45325
2016-08-20 21:57:15,080 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcActivityForPort37347: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,080 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcActivityForPort37347
2016-08-20 21:57:15,081 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcDetailedActivityForPort37347: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,081 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcDetailedActivityForPort37347
2016-08-20 21:57:15,081 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcActivityForPort44193: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,081 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcActivityForPort44193
2016-08-20 21:57:15,081 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcDetailedActivityForPort44193: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,081 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcDetailedActivityForPort44193
2016-08-20 21:57:15,081 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcActivityForPort36239: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,081 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcActivityForPort36239
2016-08-20 21:57:15,081 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcDetailedActivityForPort36239: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,081 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcDetailedActivityForPort36239
2016-08-20 21:57:15,081 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcActivityForPort36489: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,081 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcActivityForPort36489
2016-08-20 21:57:15,081 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcDetailedActivityForPort36489: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,082 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcDetailedActivityForPort36489
2016-08-20 21:57:15,082 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcActivityForPort35029: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,082 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcActivityForPort35029
2016-08-20 21:57:15,082 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcDetailedActivityForPort35029: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,082 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcDetailedActivityForPort35029
2016-08-20 21:57:15,082 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source ClusterMetrics: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,082 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=ClusterMetrics
2016-08-20 21:57:15,082 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcActivityForPort43931: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,082 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcActivityForPort43931
2016-08-20 21:57:15,083 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcDetailedActivityForPort43931: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,083 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcDetailedActivityForPort43931
2016-08-20 21:57:15,083 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcActivityForPort33955: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,083 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcActivityForPort33955
2016-08-20 21:57:15,083 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcDetailedActivityForPort33955: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,083 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcDetailedActivityForPort33955
2016-08-20 21:57:15,083 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcActivityForPort46239: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,083 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcActivityForPort46239
2016-08-20 21:57:15,083 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcDetailedActivityForPort46239: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,083 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcDetailedActivityForPort46239
2016-08-20 21:57:15,083 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcActivityForPort45915: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,083 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcActivityForPort45915
2016-08-20 21:57:15,084 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source RpcDetailedActivityForPort45915: class=class org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1
2016-08-20 21:57:15,084 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=RpcDetailedActivityForPort45915
2016-08-20 21:57:15,084 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source ContainerResource_container_1471710419543_0001_01_000001: class=class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
2016-08-20 21:57:15,084 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=ContainerResource_container_1471710419543_0001_01_000001
2016-08-20 21:57:15,084 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source ContainerResource_container_1471710419543_0001_01_000002: class=class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
2016-08-20 21:57:15,084 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=ContainerResource_container_1471710419543_0001_01_000002
2016-08-20 21:57:15,084 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source ContainerResource_container_1471710419543_0001_01_000003: class=class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
2016-08-20 21:57:15,085 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=ContainerResource_container_1471710419543_0001_01_000003
2016-08-20 21:57:15,085 DEBUG [main] impl.MetricsSystemImpl: Stopping metrics source ContainerResource_container_1471710419543_0001_01_000004: class=class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
2016-08-20 21:57:15,085 DEBUG [main] util.MBeans: Unregistering Hadoop:service=NodeManager,name=ContainerResource_container_1471710419543_0001_01_000004
2016-08-20 21:57:15,085 DEBUG [main] util.MBeans: Unregistering Hadoop:service=ResourceManager,name=MetricsSystem,sub=Stats
2016-08-20 21:57:15,085 INFO [main] impl.MetricsSystemImpl: NodeManager metrics system stopped.
2016-08-20 21:57:15,086 DEBUG [main] util.MBeans: Unregistering Hadoop:service=ResourceManager,name=MetricsSystem,sub=Control
2016-08-20 21:57:15,086 INFO [main] impl.MetricsSystemImpl: NodeManager metrics system shutdown complete.
2016-08-20 21:57:15,086 DEBUG [main] service.AbstractService: Service: org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore entered state STOPPED
2016-08-20 21:57:15,086 DEBUG [main] service.AbstractService: Service: Dispatcher entered state STOPPED
2016-08-20 21:57:15,086 INFO [main] event.AsyncDispatcher: AsyncDispatcher is draining to stop, ignoring any new events.
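Every "Stopping metrics source X" above is paired with an "Unregistering Hadoop:service=...,name=X": each metrics source is backed by a JMX MBean whose ObjectName embeds the service and source names, and shutdown removes them one by one. A hedged Java sketch using the standard JMX API (the ObjectName format matches the log; the helper itself is illustrative):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Illustrative helper: unregister one metrics-source MBean by the
    // "Hadoop:service=<service>,name=<source>" naming scheme seen in the log.
    class MBeanCleanup {
        static void unregister(String service, String source) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("Hadoop:service=" + service + ",name=" + source);
            if (server.isRegistered(name)) {
                System.out.println("Unregistering " + name);
                server.unregisterMBean(name); // one call per source, as in the log
            }
        }
    }

The mix of service=ResourceManager and service=NodeManager names inside one metrics system is consistent with a single-JVM minicluster run, where both daemons share the process, matching the RM and NM web services initialized together earlier in this log.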
2016-08-20 21:57:15,087 INFO [main] resourcemanager.ResourceManager: Transitioned to standby state
2016-08-20 21:57:15,093 DEBUG [Thread-347] util.ShutdownHookManager: ShutdownHookManager complete shutdown.
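The final line comes from the JVM shutdown path: ShutdownHookManager runs the registered hooks (the real one additionally orders them by priority) and then logs completion. A minimal illustrative sketch of registering such a hook with the plain JDK API:

    // Illustrative sketch: a JVM shutdown hook that runs cleanup and then
    // reports completion, in the spirit of the final log line above.
    class ShutdownDemo {
        public static void main(String[] args) {
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                // stop services / flush logs here, then:
                System.out.println("ShutdownHookManager complete shutdown.");
            }));
        }
    }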