2014-07-23 16:54:25,374 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114665374, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:28,375 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114668375, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:29,474 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:54:29,474 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:54:31,375 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114671375, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:34,376 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114674376, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:37,377 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114677377, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:39,475 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:54:39,475 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:54:40,377 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114680377, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:43,378 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114683378, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:46,379 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114686379, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:49,380 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114689380, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:49,476 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:54:49,476 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:54:52,380 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114692380, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:55,381 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114695381, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:58,382 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114698382, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:54:59,477 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:54:59,477 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:01,382 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114701382, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:04,383 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114704383, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:04,551 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node not found resyncing HOST-10-18-40-84:45026
2014-07-23 16:55:05,574 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved HOST-10-18-40-84 to /default-rack
2014-07-23 16:55:05,578 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node HOST-10-18-40-84(cmPort: 45026 httpPort: 45025) registered with capability: , assigned nodeId HOST-10-18-40-84:45026
2014-07-23 16:55:05,579 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: HOST-10-18-40-84:45026 Node Transitioned from NEW to RUNNING
2014-07-23 16:55:05,581 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node HOST-10-18-40-84:45026 clusterResource:
2014-07-23 16:55:07,384 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114707384, a, 0, 0, 0, 0, 8192, 6, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 2048, 1, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:07,885 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node not found resyncing HOST-10-18-40-95:45026
2014-07-23 16:55:08,888 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved HOST-10-18-40-95 to /default-rack
2014-07-23 16:55:08,888 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node HOST-10-18-40-95(cmPort: 45026 httpPort: 45025) registered with capability: , assigned nodeId HOST-10-18-40-95:45026
2014-07-23 16:55:08,889 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: HOST-10-18-40-95:45026 Node Transitioned from NEW to RUNNING
2014-07-23 16:55:08,890 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node HOST-10-18-40-95:45026 clusterResource:
2014-07-23 16:55:09,478 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:09,479 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:10,385 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114710385, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 4096, 3, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:13,385 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114713385, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 4096, 3, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:16,386 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114716386, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 4096, 3, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:19,387 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114719387, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 4096, 3, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:19,479 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:19,480 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:22,388 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114722388, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 4096, 3, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:25,389 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114725389, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 4096, 3, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:28,389 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114728389, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 4096, 3, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:28,483 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node not found resyncing HOST-10-18-40-26:45026
2014-07-23 16:55:29,481 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:29,481 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:29,486 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved HOST-10-18-40-26 to /default-rack
2014-07-23 16:55:29,486 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node HOST-10-18-40-26(cmPort: 45026 httpPort: 45025) registered with capability: , assigned nodeId HOST-10-18-40-26:45026
2014-07-23 16:55:29,487 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: HOST-10-18-40-26:45026 Node Transitioned from NEW to RUNNING
2014-07-23 16:55:29,488 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node HOST-10-18-40-26:45026 clusterResource:
2014-07-23 16:55:31,390 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114731390, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:34,391 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114734391, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:37,392 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114737392, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:39,482 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:39,482 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:40,392 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114740392, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:43,393 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114743393, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:46,394 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114746394, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:49,395 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114749395, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:49,483 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:49,483 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:52,396 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114752396, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:55,396 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114755396, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:58,397 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114758397, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:55:59,484 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:55:59,484 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:56:01,398 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114761398, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:56:04,399 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114764398, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:56:07,399 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114767399, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:56:09,484 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:56:09,485 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 16:56:10,400 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114770400, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:56:13,401 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114773401, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 16:56:16,401 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406114776401, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:00:01,646 INFO org.apache.hadoop.ha.ActiveStandbyElector: Checking for any old active which needs to be fenced...
2014-07-23 17:00:01,648 INFO org.apache.hadoop.ha.ActiveStandbyElector: No old node to fence
2014-07-23 17:00:01,648 INFO org.apache.hadoop.ha.ActiveStandbyElector: Writing znode /yarn/yarn-cluster/ActiveBreadCrumb to indicate that the local node is the most recent active...
2014-07-23 17:00:01,663 INFO org.apache.hadoop.conf.Configuration: found resource yarn-site.xml at file:/home/testos/july21/hadoop/etc/hadoop/yarn-site.xml
2014-07-23 17:00:01,666 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=refreshAdminAcls TARGET=AdminService RESULT=SUCCESS
2014-07-23 17:00:01,666 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to active state
2014-07-23 17:00:01,669 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=10.18.40.95:11578,10.18.40.84:11578,10.18.40.26:11578 sessionTimeout=10000 watcher=null
2014-07-23 17:00:01,670 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server 10.18.40.84/10.18.40.84:11578. Will not attempt to authenticate using SASL (java.lang.SecurityException: Unable to locate a login configuration)
2014-07-23 17:00:01,671 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to 10.18.40.84/10.18.40.84:11578, initiating session
2014-07-23 17:00:01,674 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Created new ZK connection
2014-07-23 17:00:01,680 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server 10.18.40.84/10.18.40.84:11578, sessionid = 0x24762bf7b6d0003, negotiated timeout = 10000
2014-07-23 17:00:01,771 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Fencing node /rmstore/ZKRMStateRoot/RM_ZK_FENCING_LOCK doesn't exist to delete
2014-07-23 17:00:01,871 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Watcher event type: None with state:SyncConnected for path:null for Service org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore in state org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: STARTED
2014-07-23 17:00:01,872 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: ZKRMStateStore Session connected
2014-07-23 17:00:01,874 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Loaded RM state version info 1.0
2014-07-23 17:00:02,060 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,089 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,092 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,095 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,097 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,101 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,103 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,106 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,108 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,110 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,113 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,114 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,117 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,119 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,121 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,123 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,125 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,127 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,129 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,131 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,133 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,135 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,137 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,138 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,140 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,142 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,143 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,145 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,147 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,151 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,153 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,155 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,158 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,159 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,161 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,163 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,165 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,168 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,169 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,171 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,173 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,174 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,176 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,180 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,182 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,183 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,185 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,187 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,191 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,194 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,196 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,198 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,200 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,202 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,203 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,205 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,207 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,209 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,212 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,215 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,221 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,226 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,229 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,231 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,233 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,235 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,237 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,240 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,244 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,247 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,249 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,251 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,253 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,255 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,257 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,258 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,260 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,262 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,264 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,266 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,269 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,271 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,273 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,275 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,277 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,280 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,282 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,284 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,286 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,289 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,291 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,293 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,295 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,298 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,300 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,302 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,305 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,308 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,310 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,312 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,315 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,317 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,320 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,322 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,324 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,326 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,328 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,329 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,331 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,333 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,335 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,336 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,338 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,340 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,342 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,344 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,347 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,349 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,353 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,355 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store
2014-07-23 17:00:02,357 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from
ZK state store 2014-07-23 17:00:02,358 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,364 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,367 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,368 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,370 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,371 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,373 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,374 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,376 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,378 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,379 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,381 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,382 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,384 INFO 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,387 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,388 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,390 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,392 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,394 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,395 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,397 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,402 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,405 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,407 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,409 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,411 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,413 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from 
ZK state store 2014-07-23 17:00:02,422 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,424 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,426 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,428 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,431 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,450 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,451 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,453 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,455 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,456 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,458 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,459 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,461 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,463 INFO 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,466 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,467 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Done Loading applications from ZK state store 2014-07-23 17:00:02,468 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: recovering RMDelegationTokenSecretManager. 2014-07-23 17:00:02,468 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Recovering 164 applications 2014-07-23 17:00:02,508 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406002968974_0001 2014-07-23 17:00:02,530 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406002968974_0001 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,532 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406002968974_0001_000001 with final state: FINISHED 2014-07-23 17:00:02,541 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406002968974_0001_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,548 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406002968974_0001 State change from NEW to FINISHED 2014-07-23 17:00:02,548 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406030008028_0001 2014-07-23 17:00:02,548 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406002968974_0001 2014-07-23 17:00:02,549 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406030008028_0001 with 2 attempts and final state = FINISHED 2014-07-23 17:00:02,549 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406030008028_0001_000001 with final state: FAILED 2014-07-23 17:00:02,550 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406002968974_0001,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406002968974_0001/jobhistory/job/job_1406002968974_0001,appMasterHost=N/A,startTime=1406003436123,finishTime=1406003471149,finalStatus=SUCCEEDED 2014-07-23 17:00:02,553 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406030008028_0001_000002 with final state: FINISHED 2014-07-23 17:00:02,553 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406030008028_0001_000001 State change from NEW to FAILED 2014-07-23 17:00:02,553 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406030008028_0001_000002 State change from NEW to FINISHED 2014-07-23 17:00:02,553 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406030008028_0001 State change from NEW to FINISHED 2014-07-23 17:00:02,554 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406030008028_0002 2014-07-23 17:00:02,554 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406030008028_0002 with 2 attempts and final state = FINISHED 2014-07-23 17:00:02,554 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406030008028_0002_000001 with final state: FAILED 2014-07-23 
17:00:02,555 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406030008028_0002_000002 with final state: FINISHED 2014-07-23 17:00:02,555 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406030008028_0001 2014-07-23 17:00:02,555 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406030008028_0002_000001 State change from NEW to FAILED 2014-07-23 17:00:02,555 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406030008028_0002_000002 State change from NEW to FINISHED 2014-07-23 17:00:02,555 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406030008028_0002 State change from NEW to FINISHED 2014-07-23 17:00:02,555 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406030008028_0001,name=Sleep job,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406030008028_0001/jobhistory/job/job_1406030008028_0001,appMasterHost=N/A,startTime=1406030101083,finishTime=1406030836764,finalStatus=SUCCEEDED 2014-07-23 17:00:02,555 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406106960130_0001 2014-07-23 17:00:02,556 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406030008028_0002 2014-07-23 17:00:02,556 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406106960130_0001 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,557 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: 
appattempt_1406106960130_0001_000001 with final state: KILLED 2014-07-23 17:00:02,557 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406106960130_0001_000001 State change from NEW to KILLED 2014-07-23 17:00:02,558 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406030008028_0002,name=Sleep job,user=testos,queue=c,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406030008028_0002/jobhistory/job/job_1406030008028_0002,appMasterHost=N/A,startTime=1406030101083,finishTime=1406030829455,finalStatus=SUCCEEDED 2014-07-23 17:00:02,559 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406106960130_0001 State change from NEW to KILLED 2014-07-23 17:00:02,559 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007392326_0001 2014-07-23 17:00:02,559 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007392326_0001 with 2 attempts and final state = FINISHED 2014-07-23 17:00:02,559 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007392326_0001_000001 with final state: FAILED 2014-07-23 17:00:02,559 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007392326_0001_000002 with final state: FINISHED 2014-07-23 17:00:02,560 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007392326_0001_000001 State change from NEW to FAILED 2014-07-23 17:00:02,560 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007392326_0001_000002 State change from NEW to FINISHED 2014-07-23 17:00:02,560 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007392326_0001 State 
change from NEW to FINISHED 2014-07-23 17:00:02,560 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0001 2014-07-23 17:00:02,560 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0001 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,560 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0001_000001 with final state: KILLED 2014-07-23 17:00:02,561 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0001_000001 State change from NEW to KILLED 2014-07-23 17:00:02,561 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0001 State change from NEW to KILLED 2014-07-23 17:00:02,561 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0003 2014-07-23 17:00:02,561 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0003 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,561 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0003_000001 with final state: FINISHED 2014-07-23 17:00:02,561 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0003_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,561 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0003 State change from NEW to FINISHED 2014-07-23 17:00:02,561 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0002 2014-07-23 17:00:02,562 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0002 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,562 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0002_000001 with final state: KILLED 2014-07-23 17:00:02,562 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0002_000001 State change from NEW to KILLED 2014-07-23 17:00:02,562 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0002 State change from NEW to KILLED 2014-07-23 17:00:02,562 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0005 2014-07-23 17:00:02,562 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0005 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,562 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0005_000001 with final state: FINISHED 2014-07-23 17:00:02,563 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0005_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,563 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0005 State change from NEW to FINISHED 2014-07-23 17:00:02,563 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0004 2014-07-23 17:00:02,563 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0004 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,563 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
Recovering attempt: appattempt_1406035038624_0004_000001 with final state: FINISHED 2014-07-23 17:00:02,563 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0004_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,563 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0004 State change from NEW to FINISHED 2014-07-23 17:00:02,564 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0007 2014-07-23 17:00:02,564 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0007 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,564 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0007_000001 with final state: FINISHED 2014-07-23 17:00:02,564 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0007_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,564 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0007 State change from NEW to FINISHED 2014-07-23 17:00:02,564 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0006 2014-07-23 17:00:02,566 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406106960130_0001 2014-07-23 17:00:02,566 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0006 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,567 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: 
appId=application_1406106960130_0001,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406106960130_0001/,appMasterHost=N/A,startTime=1406106974294,finishTime=1406106991308,finalStatus=KILLED 2014-07-23 17:00:02,567 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0006_000001 with final state: KILLED 2014-07-23 17:00:02,567 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007392326_0001 2014-07-23 17:00:02,567 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007392326_0001,name=word count,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007392326_0001/jobhistory/job/job_1406007392326_0001,appMasterHost=N/A,startTime=1406007497350,finishTime=1406007589729,finalStatus=SUCCEEDED 2014-07-23 17:00:02,567 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0006_000001 State change from NEW to KILLED 2014-07-23 17:00:02,568 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0006 State change from NEW to KILLED 2014-07-23 17:00:02,568 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007392326_0002 2014-07-23 17:00:02,568 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0001 2014-07-23 17:00:02,568 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0001,name=Sleep 
job,user=testos,queue=a2,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0001/,appMasterHost=N/A,startTime=1406035329680,finishTime=1406035529203,finalStatus=KILLED 2014-07-23 17:00:02,568 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0003 2014-07-23 17:00:02,569 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0003,name=Sleep job,user=testos,queue=a1,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0003/jobhistory/job/job_1406035038624_0003,appMasterHost=N/A,startTime=1406035600326,finishTime=1406036428057,finalStatus=SUCCEEDED 2014-07-23 17:00:02,569 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007392326_0002 with 2 attempts and final state = FINISHED 2014-07-23 17:00:02,569 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007392326_0002_000001 with final state: FAILED 2014-07-23 17:00:02,570 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007392326_0002_000002 with final state: FINISHED 2014-07-23 17:00:02,570 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007392326_0002_000001 State change from NEW to FAILED 2014-07-23 17:00:02,570 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007392326_0002_000002 State change from NEW to FINISHED 2014-07-23 17:00:02,570 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007392326_0002 State change from NEW to FINISHED 2014-07-23 17:00:02,570 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level 
is set to application:application_1406035038624_0009 2014-07-23 17:00:02,571 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0009 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,571 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0009_000001 with final state: FINISHED 2014-07-23 17:00:02,572 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0002 2014-07-23 17:00:02,572 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0009_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,572 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0009 State change from NEW to FINISHED 2014-07-23 17:00:02,572 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406003865107_0001 2014-07-23 17:00:02,572 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0002,name=Sleep job,user=testos,queue=a2,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0002/,appMasterHost=N/A,startTime=1406035574827,finishTime=1406035839173,finalStatus=KILLED 2014-07-23 17:00:02,573 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0005 2014-07-23 17:00:02,573 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406003865107_0001 with 2 attempts and final state = FINISHED 2014-07-23 17:00:02,573 INFO 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0005,name=Sleep job,user=testos,queue=b,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0005/jobhistory/job/job_1406035038624_0005,appMasterHost=N/A,startTime=1406036479388,finishTime=1406036974145,finalStatus=SUCCEEDED 2014-07-23 17:00:02,573 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406003865107_0001_000001 with final state: FAILED 2014-07-23 17:00:02,581 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406003865107_0001_000002 with final state: FINISHED 2014-07-23 17:00:02,581 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406003865107_0001_000001 State change from NEW to FAILED 2014-07-23 17:00:02,581 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406003865107_0001_000002 State change from NEW to FINISHED 2014-07-23 17:00:02,581 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406003865107_0001 State change from NEW to FINISHED 2014-07-23 17:00:02,582 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0008 2014-07-23 17:00:02,583 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0008 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,583 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0008_000001 with final state: FINISHED 2014-07-23 17:00:02,583 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0008_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,584 
INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0008 State change from NEW to FINISHED
2014-07-23 17:00:02,584 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406003865107_0002
2014-07-23 17:00:02,584 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406003865107_0002 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,585 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406003865107_0002_000001 with final state: FAILED
2014-07-23 17:00:02,585 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406003865107_0002_000002 with final state: FINISHED
2014-07-23 17:00:02,586 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406003865107_0002_000001 State change from NEW to FAILED
2014-07-23 17:00:02,586 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406003865107_0002_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,586 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406003865107_0002 State change from NEW to FINISHED
2014-07-23 17:00:02,586 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0011
2014-07-23 17:00:02,586 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0011 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,587 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0011_000001 with final state: KILLED
2014-07-23 17:00:02,587 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0011_000001 State change from NEW to KILLED
2014-07-23 17:00:02,587 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0011 State change from NEW to KILLED
2014-07-23 17:00:02,587 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0010
2014-07-23 17:00:02,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0010 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0010_000001 with final state: FINISHED
2014-07-23 17:00:02,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0010_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,589 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0010 State change from NEW to FINISHED
2014-07-23 17:00:02,589 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406040531743_0002
2014-07-23 17:00:02,589 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406040531743_0002 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,589 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406040531743_0002_000001 with final state: KILLED
2014-07-23 17:00:02,590 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406040531743_0002_000001 State change from NEW to KILLED
2014-07-23 17:00:02,590 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406040531743_0002 State change from NEW to KILLED
2014-07-23 17:00:02,590 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406033267767_0007
2014-07-23 17:00:02,590 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406033267767_0007 with 2 attempts and final state = FAILED
2014-07-23 17:00:02,592 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0004
2014-07-23 17:00:02,592 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0007_000001 with final state: FAILED
2014-07-23 17:00:02,592 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0004,name=Sleep job,user=testos,queue=b,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0004/jobhistory/job/job_1406035038624_0004,appMasterHost=N/A,startTime=1406035739124,finishTime=1406036450078,finalStatus=SUCCEEDED
2014-07-23 17:00:02,593 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0007
2014-07-23 17:00:02,593 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0007_000002 with final state: FAILED
2014-07-23 17:00:02,593 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0007,name=Sleep job,user=testos,queue=a1,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0007/jobhistory/job/job_1406035038624_0007,appMasterHost=N/A,startTime=1406036479352,finishTime=1406037017011,finalStatus=SUCCEEDED
2014-07-23 17:00:02,593 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0006
2014-07-23 17:00:02,594 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0006,name=Sleep job,user=testos,queue=a2,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0006/,appMasterHost=N/A,startTime=1406036479374,finishTime=1406036571243,finalStatus=KILLED
2014-07-23 17:00:02,594 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007392326_0002
2014-07-23 17:00:02,594 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007392326_0002,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007392326_0002/jobhistory/job/job_1406007392326_0002,appMasterHost=N/A,startTime=1406007497421,finishTime=1406007589945,finalStatus=SUCCEEDED
2014-07-23 17:00:02,595 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0009
2014-07-23 17:00:02,595 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0009,name=Sleep job,user=testos,queue=a1,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0009/jobhistory/job/job_1406035038624_0009,appMasterHost=N/A,startTime=1406037066704,finishTime=1406037372868,finalStatus=SUCCEEDED
2014-07-23 17:00:02,595 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406003865107_0001
2014-07-23 17:00:02,595 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406003865107_0001,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406003865107_0001/jobhistory/job/job_1406003865107_0001,appMasterHost=N/A,startTime=1406004408577,finishTime=1406004487416,finalStatus=SUCCEEDED
2014-07-23 17:00:02,596 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0008
2014-07-23 17:00:02,596 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0008,name=Sleep job,user=testos,queue=b,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0008/jobhistory/job/job_1406035038624_0008,appMasterHost=N/A,startTime=1406037066648,finishTime=1406037358337,finalStatus=SUCCEEDED
2014-07-23 17:00:02,596 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406003865107_0002
2014-07-23 17:00:02,596 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406003865107_0002,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406003865107_0002/jobhistory/job/job_1406003865107_0002,appMasterHost=N/A,startTime=1406004408971,finishTime=1406004501047,finalStatus=FAILED
2014-07-23 17:00:02,597 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0007_000001 State change from NEW to FAILED
2014-07-23 17:00:02,597 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0011
2014-07-23 17:00:02,597 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0011,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0011/,appMasterHost=N/A,startTime=1406037472943,finishTime=1406037638791,finalStatus=KILLED
2014-07-23 17:00:02,597 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0010
2014-07-23 17:00:02,597 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0007_000002 State change from NEW to FAILED
2014-07-23 17:00:02,597 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406033267767_0007 State change from NEW to FAILED
2014-07-23 17:00:02,597 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406098935068_0006
2014-07-23 17:00:02,597 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0010,name=Sleep job,user=testos,queue=a2,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0010/jobhistory/job/job_1406035038624_0010,appMasterHost=N/A,startTime=1406037066915,finishTime=1406037125580,finalStatus=FAILED
2014-07-23 17:00:02,598 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406040531743_0002
2014-07-23 17:00:02,598 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406040531743_0002,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406040531743_0002/,appMasterHost=N/A,startTime=1406040670073,finishTime=1406040801467,finalStatus=KILLED
2014-07-23 17:00:02,598 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1406033267767_0007 failed 2 times due to Attempt recovered after RM restartAM Container for appattempt_1406033267767_0007_000002 exited with exitCode: 0 due to: .Failing this attempt.. Failing the application. APPID=application_1406033267767_0007
2014-07-23 17:00:02,598 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406033267767_0007,name=Sleep job,user=testos,queue=a2,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406033267767_0007/,appMasterHost=N/A,startTime=1406034617571,finishTime=1406035108511,finalStatus=FAILED
2014-07-23 17:00:02,604 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406098935068_0006 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,604 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406098935068_0006_000001 with final state: FINISHED
2014-07-23 17:00:02,604 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406098935068_0006_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,605 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406098935068_0006 State change from NEW to FINISHED
2014-07-23 17:00:02,605 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406033267767_0008
2014-07-23 17:00:02,606 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406033267767_0008 with 2 attempts and final state = KILLED
2014-07-23 17:00:02,606 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0008_000001 with final state: FAILED
2014-07-23 17:00:02,606 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0008_000002 with final state: KILLED
2014-07-23 17:00:02,606 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0008_000001 State change from NEW to FAILED
2014-07-23 17:00:02,607 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0008_000002 State change from NEW to KILLED
2014-07-23 17:00:02,607 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406033267767_0008 State change from NEW to KILLED
2014-07-23 17:00:02,607 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406040531743_0001
2014-07-23 17:00:02,607 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406040531743_0001 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,607 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406040531743_0001_000001 with final state: KILLED
2014-07-23 17:00:02,608 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406040531743_0001_000001 State change from NEW to KILLED
2014-07-23 17:00:02,608 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406040531743_0001 State change from NEW to KILLED
2014-07-23 17:00:02,608 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406098935068_0005
2014-07-23 17:00:02,608 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406098935068_0005 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,608 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406098935068_0005_000001 with final state: KILLED
2014-07-23 17:00:02,609 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406098935068_0005_000001 State change from NEW to KILLED
2014-07-23 17:00:02,609 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406098935068_0005 State change from NEW to KILLED
2014-07-23 17:00:02,609 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406098935068_0004
2014-07-23 17:00:02,609 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406098935068_0004 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,609 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406098935068_0004_000001 with final state: KILLED
2014-07-23 17:00:02,609 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406098935068_0006
2014-07-23 17:00:02,610 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406098935068_0004_000001 State change from NEW to KILLED
2014-07-23 17:00:02,610 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406098935068_0006,name=Sleep job,user=testos,queue=b,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406098935068_0006/jobhistory/job/job_1406098935068_0006,appMasterHost=N/A,startTime=1406105764248,finishTime=1406106376470,finalStatus=SUCCEEDED
2014-07-23 17:00:02,610 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406098935068_0004 State change from NEW to KILLED
2014-07-23 17:00:02,610 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406033267767_0008
2014-07-23 17:00:02,610 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406033267767_0008,name=Sleep job,user=testos,queue=a1,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406033267767_0008/,appMasterHost=N/A,startTime=1406034782837,finishTime=1406035518065,finalStatus=KILLED
2014-07-23 17:00:02,610 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406098935068_0003
2014-07-23 17:00:02,610 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406040531743_0001
2014-07-23 17:00:02,611 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406040531743_0001,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406040531743_0001/,appMasterHost=N/A,startTime=1406040667432,finishTime=1406040795631,finalStatus=KILLED
2014-07-23 17:00:02,611 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406098935068_0003 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,611 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406098935068_0005
2014-07-23 17:00:02,611 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406098935068_0005,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406098935068_0005/,appMasterHost=N/A,startTime=1406099248457,finishTime=1406099296896,finalStatus=KILLED
2014-07-23 17:00:02,611 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406098935068_0004
2014-07-23 17:00:02,612 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406098935068_0004,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406098935068_0004/,appMasterHost=N/A,startTime=1406099222460,finishTime=1406099536912,finalStatus=KILLED
2014-07-23 17:00:02,613 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406098935068_0003_000001 with final state: KILLED
2014-07-23 17:00:02,613 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406098935068_0003_000001 State change from NEW to KILLED
2014-07-23 17:00:02,613 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406098935068_0003 State change from NEW to KILLED
2014-07-23 17:00:02,613 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406040531743_0006
2014-07-23 17:00:02,614 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406040531743_0006 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,616 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406098935068_0003
2014-07-23 17:00:02,617 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406098935068_0003,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406098935068_0003/,appMasterHost=N/A,startTime=1406099051386,finishTime=1406099056901,finalStatus=KILLED
2014-07-23 17:00:02,617 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406040531743_0006_000001 with final state: FINISHED
2014-07-23 17:00:02,617 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406040531743_0006_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,617 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406040531743_0006 State change from NEW to FINISHED
2014-07-23 17:00:02,617 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406098935068_0002
2014-07-23 17:00:02,618 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406098935068_0002 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,618 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406098935068_0002_000001 with final state: KILLED
2014-07-23 17:00:02,618 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406098935068_0002_000001 State change from NEW to KILLED
2014-07-23 17:00:02,618 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406098935068_0002 State change from NEW to KILLED
2014-07-23 17:00:02,619 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406040531743_0005
2014-07-23 17:00:02,619 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406040531743_0005 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,619 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406040531743_0005_000001 with final state: FINISHED
2014-07-23 17:00:02,619 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406040531743_0005_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,620 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406040531743_0005 State change from NEW to FINISHED
2014-07-23 17:00:02,620 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406098935068_0001
2014-07-23 17:00:02,620 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406098935068_0001 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,620 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406098935068_0001_000001 with final state: KILLED
2014-07-23 17:00:02,620 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406098935068_0001_000001 State change from NEW to KILLED
2014-07-23 17:00:02,628 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406098935068_0001 State change from NEW to KILLED
2014-07-23 17:00:02,628 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406040531743_0004
2014-07-23 17:00:02,629 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406040531743_0004 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,629 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406040531743_0004_000001 with final state: FINISHED
2014-07-23 17:00:02,629 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406040531743_0004_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,629 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406040531743_0004 State change from NEW to FINISHED
2014-07-23 17:00:02,629 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406040531743_0003
2014-07-23 17:00:02,630 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406040531743_0003 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,630 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406040531743_0003_000001 with final state: FINISHED
2014-07-23 17:00:02,630 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406040531743_0003_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,630 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406040531743_0003 State change from NEW to FINISHED
2014-07-23 17:00:02,630 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406033267767_0001
2014-07-23 17:00:02,631 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406033267767_0001 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,631 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0001_000001 with final state: FINISHED
2014-07-23 17:00:02,631 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0001_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,632 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406033267767_0001 State change from NEW to FINISHED
2014-07-23 17:00:02,632 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406033267767_0002
2014-07-23 17:00:02,643 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406040531743_0006
2014-07-23 17:00:02,643 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406033267767_0002 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,644 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406040531743_0006,name=Sleep job,user=testos,queue=b,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406040531743_0006/jobhistory/job/job_1406040531743_0006,appMasterHost=N/A,startTime=1406041374449,finishTime=1406042041685,finalStatus=SUCCEEDED
2014-07-23 17:00:02,644 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0002_000001 with final state: FINISHED
2014-07-23 17:00:02,644 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0002_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,644 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406033267767_0002 State change from NEW to FINISHED
2014-07-23 17:00:02,644 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406033267767_0003
2014-07-23 17:00:02,645 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406033267767_0003 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,645 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0003_000001 with final state: KILLED
2014-07-23 17:00:02,645 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0003_000001 State change from NEW to KILLED
2014-07-23 17:00:02,646 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406033267767_0003 State change from NEW to KILLED
2014-07-23 17:00:02,646 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406002968974_0004
2014-07-23 17:00:02,646 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406002968974_0004 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,646 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406002968974_0004_000001 with final state: FAILED
2014-07-23 17:00:02,647 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406002968974_0004_000002 with final state: FINISHED
2014-07-23 17:00:02,647 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406002968974_0004_000001 State change from NEW to FAILED
2014-07-23 17:00:02,647 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406098935068_0002
2014-07-23 17:00:02,647 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406002968974_0004_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,647 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406002968974_0004 State change from NEW to FINISHED
2014-07-23 17:00:02,647 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406098935068_0002,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406098935068_0002/,appMasterHost=N/A,startTime=1406099006230,finishTime=1406099026886,finalStatus=KILLED
2014-07-23 17:00:02,647 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406033267767_0004
2014-07-23 17:00:02,648 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406040531743_0005
2014-07-23 17:00:02,648 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406040531743_0005,name=Sleep job,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406040531743_0005/jobhistory/job/job_1406040531743_0005,appMasterHost=N/A,startTime=1406041374343,finishTime=1406041829004,finalStatus=SUCCEEDED
2014-07-23 17:00:02,648 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406033267767_0004 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,648 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406098935068_0001
2014-07-23 17:00:02,648 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0004_000001 with final state: KILLED
2014-07-23 17:00:02,648 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406098935068_0001,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406098935068_0001/,appMasterHost=N/A,startTime=1406098943846,finishTime=1406098966941,finalStatus=KILLED
2014-07-23 17:00:02,648 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406040531743_0004
2014-07-23 17:00:02,649 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0004_000001 State change from NEW to KILLED
2014-07-23 17:00:02,649 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406040531743_0004,name=Sleep job,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406040531743_0004/jobhistory/job/job_1406040531743_0004,appMasterHost=N/A,startTime=1406040823983,finishTime=1406041361334,finalStatus=SUCCEEDED
2014-07-23 17:00:02,649 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406033267767_0004 State change from NEW to KILLED
2014-07-23 17:00:02,649 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406040531743_0003
2014-07-23 17:00:02,649 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406033267767_0005
2014-07-23 17:00:02,649 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406040531743_0003,name=Sleep job,user=testos,queue=b,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406040531743_0003/jobhistory/job/job_1406040531743_0003,appMasterHost=N/A,startTime=1406040823927,finishTime=1406041345041,finalStatus=SUCCEEDED
2014-07-23 17:00:02,649 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406033267767_0001
2014-07-23 17:00:02,649 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406033267767_0005 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,649 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406033267767_0001,name=Sleep job,user=testos,queue=a2,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406033267767_0001/jobhistory/job/job_1406033267767_0001,appMasterHost=N/A,startTime=1406033824622,finishTime=1406033922754,finalStatus=SUCCEEDED
2014-07-23 17:00:02,650 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0005_000001 with final state: KILLED
2014-07-23 17:00:02,650 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406033267767_0002
2014-07-23 17:00:02,650 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406033267767_0002,name=Sleep job,user=testos,queue=a1,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406033267767_0002/jobhistory/job/job_1406033267767_0002,appMasterHost=N/A,startTime=1406033870081,finishTime=1406034110019,finalStatus=SUCCEEDED
2014-07-23 17:00:02,650 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0005_000001 State change from NEW to KILLED
2014-07-23 17:00:02,650 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406033267767_0003
2014-07-23 17:00:02,650 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406033267767_0005 State change from NEW to KILLED
2014-07-23 17:00:02,650 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406002968974_0002
2014-07-23 17:00:02,650 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406033267767_0003,name=Sleep job,user=testos,queue=a2,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406033267767_0003/,appMasterHost=N/A,startTime=1406033938459,finishTime=1406034493271,finalStatus=KILLED
2014-07-23 17:00:02,650 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406002968974_0004
2014-07-23 17:00:02,651 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406002968974_0002 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,651 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406002968974_0004,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406002968974_0004/jobhistory/job/job_1406002968974_0004,appMasterHost=N/A,startTime=1406003642922,finishTime=1406003707877,finalStatus=SUCCEEDED
2014-07-23 17:00:02,651 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406033267767_0004
2014-07-23 17:00:02,651 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406002968974_0002_000001 with final state: FINISHED
2014-07-23 17:00:02,651 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406033267767_0004,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406033267767_0004/,appMasterHost=N/A,startTime=1406033981871,finishTime=1406034591834,finalStatus=KILLED
2014-07-23 17:00:02,651 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406033267767_0005
2014-07-23 17:00:02,651 
INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406002968974_0002_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,651 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406033267767_0005,name=Sleep job,user=testos,queue=a1,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406033267767_0005/,appMasterHost=N/A,startTime=1406034325037,finishTime=1406034595370,finalStatus=KILLED 2014-07-23 17:00:02,651 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406002968974_0002 State change from NEW to FINISHED 2014-07-23 17:00:02,652 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406002968974_0003 2014-07-23 17:00:02,652 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406002968974_0003 with 2 attempts and final state = FINISHED 2014-07-23 17:00:02,652 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406002968974_0003_000001 with final state: FAILED 2014-07-23 17:00:02,653 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406002968974_0003_000002 with final state: FINISHED 2014-07-23 17:00:02,653 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406002968974_0003_000001 State change from NEW to FAILED 2014-07-23 17:00:02,653 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406002968974_0003_000002 State change from NEW to FINISHED 2014-07-23 17:00:02,653 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406002968974_0003 State change from NEW to FINISHED 2014-07-23 17:00:02,653 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: 
Default priority level is set to application:application_1406033267767_0006 2014-07-23 17:00:02,654 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406033267767_0006 with 2 attempts and final state = FAILED 2014-07-23 17:00:02,654 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0006_000001 with final state: FAILED 2014-07-23 17:00:02,654 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406033267767_0006_000002 with final state: FAILED 2014-07-23 17:00:02,654 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0006_000001 State change from NEW to FAILED 2014-07-23 17:00:02,655 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406033267767_0006_000002 State change from NEW to FAILED 2014-07-23 17:00:02,655 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406033267767_0006 State change from NEW to FAILED 2014-07-23 17:00:02,655 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406040253740_0001 2014-07-23 17:00:02,655 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406040253740_0001 with 2 attempts and final state = KILLED 2014-07-23 17:00:02,655 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406040253740_0001_000001 with final state: FAILED 2014-07-23 17:00:02,656 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406040253740_0001_000002 with final state: KILLED 2014-07-23 17:00:02,656 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
appattempt_1406040253740_0001_000001 State change from NEW to FAILED 2014-07-23 17:00:02,656 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406040253740_0001_000002 State change from NEW to KILLED 2014-07-23 17:00:02,656 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406040253740_0001 State change from NEW to KILLED 2014-07-23 17:00:02,656 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0011 2014-07-23 17:00:02,657 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0011 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,657 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0011_000001 with final state: FINISHED 2014-07-23 17:00:02,657 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0011_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,657 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406002968974_0002 2014-07-23 17:00:02,657 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0011 State change from NEW to FINISHED 2014-07-23 17:00:02,657 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0010 2014-07-23 17:00:02,657 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406002968974_0002,name=word 
count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406002968974_0002/jobhistory/job/job_1406002968974_0002,appMasterHost=N/A,startTime=1406003439915,finishTime=1406003474535,finalStatus=SUCCEEDED 2014-07-23 17:00:02,658 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406002968974_0003 2014-07-23 17:00:02,658 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0010 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,658 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406002968974_0003,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406002968974_0003/jobhistory/job/job_1406002968974_0003,appMasterHost=N/A,startTime=1406003642441,finishTime=1406003715940,finalStatus=SUCCEEDED 2014-07-23 17:00:02,658 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0010_000001 with final state: FINISHED 2014-07-23 17:00:02,658 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1406033267767_0006 failed 2 times due to Attempt recovered after RM restartAM Container for appattempt_1406033267767_0006_000002 exited with exitCode: 0 due to: .Failing this attempt.. Failing the application. 
APPID=application_1406033267767_0006 2014-07-23 17:00:02,658 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406033267767_0006,name=Sleep job,user=testos,queue=b,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406033267767_0006/,appMasterHost=N/A,startTime=1406034617523,finishTime=1406035108491,finalStatus=FAILED 2014-07-23 17:00:02,658 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0010_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,658 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406040253740_0001 2014-07-23 17:00:02,659 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0010 State change from NEW to FINISHED 2014-07-23 17:00:02,659 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406040253740_0001,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406040253740_0001/,appMasterHost=N/A,startTime=1406040296221,finishTime=1406040654304,finalStatus=KILLED 2014-07-23 17:00:02,659 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0009 2014-07-23 17:00:02,659 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0011 2014-07-23 17:00:02,659 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0011,name=word 
count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0011/jobhistory/job/job_1405935231196_0011,appMasterHost=N/A,startTime=1405939009480,finishTime=1405939050516,finalStatus=SUCCEEDED 2014-07-23 17:00:02,659 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0009 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,659 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0010 2014-07-23 17:00:02,659 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0009_000001 with final state: FINISHED 2014-07-23 17:00:02,659 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0010,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0010/jobhistory/job/job_1405935231196_0010,appMasterHost=N/A,startTime=1405938993250,finishTime=1405939032446,finalStatus=SUCCEEDED 2014-07-23 17:00:02,660 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0009_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,660 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0009 State change from NEW to FINISHED 2014-07-23 17:00:02,660 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0008 2014-07-23 17:00:02,660 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0008 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,660 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0008_000001 with final state: FINISHED 2014-07-23 17:00:02,661 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0008_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,661 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0008 State change from NEW to FINISHED 2014-07-23 17:00:02,661 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0015 2014-07-23 17:00:02,661 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0015 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,661 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0015_000001 with final state: FINISHED 2014-07-23 17:00:02,662 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0015_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,662 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0015 State change from NEW to FINISHED 2014-07-23 17:00:02,662 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0014 2014-07-23 17:00:02,662 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0014 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,662 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0014_000001 with final state: FINISHED 2014-07-23 17:00:02,663 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0014_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,663 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0014 State change from NEW to FINISHED 2014-07-23 17:00:02,663 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0013 2014-07-23 17:00:02,663 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0013 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,663 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0013_000001 with final state: FINISHED 2014-07-23 17:00:02,664 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0013_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,664 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0013 State change from NEW to FINISHED 2014-07-23 17:00:02,664 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0012 2014-07-23 17:00:02,664 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0012 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,664 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0012_000001 with final state: FINISHED 2014-07-23 17:00:02,665 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0012_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,665 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: 
application_1405935231196_0012 State change from NEW to FINISHED 2014-07-23 17:00:02,665 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0019 2014-07-23 17:00:02,665 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0019 with 2 attempts and final state = FAILED 2014-07-23 17:00:02,665 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0019_000001 with final state: FAILED 2014-07-23 17:00:02,666 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0019_000002 with final state: FAILED 2014-07-23 17:00:02,666 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0019_000001 State change from NEW to FAILED 2014-07-23 17:00:02,666 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0019_000002 State change from NEW to FAILED 2014-07-23 17:00:02,666 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0019 State change from NEW to FAILED 2014-07-23 17:00:02,666 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0018 2014-07-23 17:00:02,667 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0018 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,667 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0018_000001 with final state: FINISHED 2014-07-23 17:00:02,667 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0018_000001 State change from NEW 
to FINISHED 2014-07-23 17:00:02,667 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0018 State change from NEW to FINISHED 2014-07-23 17:00:02,667 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0017 2014-07-23 17:00:02,668 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0017 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,668 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0017_000001 with final state: FINISHED 2014-07-23 17:00:02,668 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0017_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,668 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0017 State change from NEW to FINISHED 2014-07-23 17:00:02,669 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0016 2014-07-23 17:00:02,669 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0016 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,669 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0016_000001 with final state: FINISHED 2014-07-23 17:00:02,669 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0016_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,670 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0016 State change from NEW to FINISHED 2014-07-23 17:00:02,670 INFO 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0023 2014-07-23 17:00:02,670 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0023 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,670 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0023_000001 with final state: FINISHED 2014-07-23 17:00:02,670 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0023_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,671 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0023 State change from NEW to FINISHED 2014-07-23 17:00:02,671 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0022 2014-07-23 17:00:02,671 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0022 with 2 attempts and final state = FAILED 2014-07-23 17:00:02,671 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0022_000001 with final state: FAILED 2014-07-23 17:00:02,672 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0009 2014-07-23 17:00:02,672 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0009,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0009/jobhistory/job/job_1405935231196_0009,appMasterHost=N/A,startTime=1405938986170,finishTime=1405939011664,finalStatus=SUCCEEDED 
2014-07-23 17:00:02,672 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0008 2014-07-23 17:00:02,672 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0008,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0008/jobhistory/job/job_1405935231196_0008,appMasterHost=N/A,startTime=1405938762441,finishTime=1405938787238,finalStatus=SUCCEEDED 2014-07-23 17:00:02,672 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0015 2014-07-23 17:00:02,673 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0015,name=TeraGen,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0015/jobhistory/job/job_1405935231196_0015,appMasterHost=N/A,startTime=1405940414601,finishTime=1405940435611,finalStatus=SUCCEEDED 2014-07-23 17:00:02,673 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0014 2014-07-23 17:00:02,673 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0014,name=TeraGen,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0014/jobhistory/job/job_1405935231196_0014,appMasterHost=N/A,startTime=1405940372053,finishTime=1405940382086,finalStatus=SUCCEEDED 2014-07-23 17:00:02,673 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded 
TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0013 2014-07-23 17:00:02,673 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0022_000002 with final state: FAILED 2014-07-23 17:00:02,673 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0013,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0013/jobhistory/job/job_1405935231196_0013,appMasterHost=N/A,startTime=1405939272793,finishTime=1405939293290,finalStatus=FAILED 2014-07-23 17:00:02,673 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0012 2014-07-23 17:00:02,674 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0012,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0012/jobhistory/job/job_1405935231196_0012,appMasterHost=N/A,startTime=1405939272560,finishTime=1405939312640,finalStatus=FAILED 2014-07-23 17:00:02,674 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0022_000001 State change from NEW to FAILED 2014-07-23 17:00:02,674 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1405935231196_0019 failed 2 times due to AM Container for appattempt_1405935231196_0019_000002 exited with exitCode: 143 due to: Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 Killed by external signal .Failing this attempt.. 
Failing the application. APPID=application_1405935231196_0019
2014-07-23 17:00:02,674 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0022_000002 State change from NEW to FAILED
2014-07-23 17:00:02,674 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0022 State change from NEW to FAILED
2014-07-23 17:00:02,674 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0019,name=TeraGen,user=testos,queue=default,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0019/,appMasterHost=N/A,startTime=1405940639280,finishTime=1405941168020,finalStatus=FAILED
2014-07-23 17:00:02,674 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406114813957_0002
2014-07-23 17:00:02,676 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406114813957_0002 with 1 attempts and final state = null
2014-07-23 17:00:02,676 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406114813957_0002_000001 with final state: null
2014-07-23 17:00:02,676 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0018
2014-07-23 17:00:02,735 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0018,name=TeraGen,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0018/jobhistory/job/job_1405935231196_0018,appMasterHost=N/A,startTime=1405940622140,finishTime=1405940850555,finalStatus=SUCCEEDED
2014-07-23 17:00:02,736 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0017
2014-07-23 17:00:02,736 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0017,name=TeraGen,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0017/jobhistory/job/job_1405935231196_0017,appMasterHost=N/A,startTime=1405940512429,finishTime=1405940611047,finalStatus=SUCCEEDED
2014-07-23 17:00:02,736 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0016
2014-07-23 17:00:02,736 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0016,name=TeraGen,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0016/jobhistory/job/job_1405935231196_0016,appMasterHost=N/A,startTime=1405940477803,finishTime=1405940497470,finalStatus=SUCCEEDED
2014-07-23 17:00:02,736 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0023
2014-07-23 17:00:02,736 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000001 State change from NEW to LAUNCHED
2014-07-23 17:00:02,736 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0023,name=TeraGen,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0023/jobhistory/job/job_1405935231196_0023,appMasterHost=N/A,startTime=1405941287598,finishTime=1405941314349,finalStatus=SUCCEEDED
2014-07-23 17:00:02,737 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1405935231196_0022 failed 2 times due to AM Container for appattempt_1405935231196_0022_000002 exited with exitCode: 143 due to: Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 Killed by external signal .Failing this attempt.. Failing the application. APPID=application_1405935231196_0022
2014-07-23 17:00:02,737 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0022,name=TeraGen,user=testos,queue=default,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0022/,appMasterHost=N/A,startTime=1405941049604,finishTime=1405941217371,finalStatus=FAILED
2014-07-23 17:00:02,739 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406114813957_0002 State change from NEW to ACCEPTED
2014-07-23 17:00:02,739 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0021
2014-07-23 17:00:02,739 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0021 with 2 attempts and final state = FAILED
2014-07-23 17:00:02,740 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0021_000001 with final state: FAILED
2014-07-23 17:00:02,740 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0021_000002 with final state: FAILED
2014-07-23 17:00:02,740 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0021_000001 State change from NEW to FAILED
2014-07-23 17:00:02,740 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0021_000002 State change from NEW to FAILED
2014-07-23 17:00:02,740 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0021 State change from NEW to FAILED
2014-07-23 17:00:02,741 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406114813957_0001
2014-07-23 17:00:02,742 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406114813957_0001 with 0 attempts and final state = FAILED
2014-07-23 17:00:02,742 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406114813957_0001 State change from NEW to FAILED
2014-07-23 17:00:02,742 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0020
2014-07-23 17:00:02,742 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0020 with 2 attempts and final state = FAILED
2014-07-23 17:00:02,743 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0020_000001 with final state: FAILED
2014-07-23 17:00:02,743 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0020_000002 with final state: FAILED
2014-07-23 17:00:02,743 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0020_000001 State change from NEW to FAILED
2014-07-23 17:00:02,743 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0020_000002 State change from NEW to FAILED
2014-07-23 17:00:02,743 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0020 State change from NEW to FAILED
2014-07-23 17:00:02,744 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0026
2014-07-23 17:00:02,744 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0026 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,744 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0026_000001 with final state: FINISHED
2014-07-23 17:00:02,744 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0026_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0026 State change from NEW to FINISHED
2014-07-23 17:00:02,745 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0027
2014-07-23 17:00:02,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0027 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0027_000001 with final state: FINISHED
2014-07-23 17:00:02,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0027_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0027 State change from NEW to FINISHED
2014-07-23 17:00:02,746 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0024
2014-07-23 17:00:02,746 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0024 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,746 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0024_000001 with final state: FINISHED
2014-07-23 17:00:02,746 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0024_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,746 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0024 State change from NEW to FINISHED
2014-07-23 17:00:02,747 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0025
2014-07-23 17:00:02,747 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0025 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,747 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0025_000001 with final state: FINISHED
2014-07-23 17:00:02,747 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0025_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,747 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0025 State change from NEW to FINISHED
2014-07-23 17:00:02,747 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0030
2014-07-23 17:00:02,741 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1405935231196_0021 failed 2 times due to AM Container for appattempt_1405935231196_0021_000002 exited with exitCode: 143 due to: Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 Killed by external signal .Failing this attempt.. Failing the application. APPID=application_1405935231196_0021
2014-07-23 17:00:02,752 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0021,name=TeraGen,user=testos,queue=default,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0021/,appMasterHost=N/A,startTime=1405940671224,finishTime=1405941167901,finalStatus=FAILED
2014-07-23 17:00:02,752 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1406114813957_0001 submitted by user testos to unknown queue: default APPID=application_1406114813957_0001
2014-07-23 17:00:02,752 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406114813957_0001,name=word count,user=testos,queue=default,state=FAILED,trackingUrl=N/A,appMasterHost=N/A,startTime=1406114955627,finishTime=1406114955749,finalStatus=FAILED
2014-07-23 17:00:02,752 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1405935231196_0020 failed 2 times due to AM Container for appattempt_1405935231196_0020_000002 exited with exitCode: 143 due to: Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 Killed by external signal .Failing this attempt.. Failing the application. APPID=application_1405935231196_0020
2014-07-23 17:00:02,752 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0020,name=TeraGen,user=testos,queue=default,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0020/,appMasterHost=N/A,startTime=1405940657062,finishTime=1405941148475,finalStatus=FAILED
2014-07-23 17:00:02,753 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0026
2014-07-23 17:00:02,753 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0026,name=TeraSort,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0026/jobhistory/job/job_1405935231196_0026,appMasterHost=N/A,startTime=1405941511610,finishTime=1405941560391,finalStatus=SUCCEEDED
2014-07-23 17:00:02,753 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0027
2014-07-23 17:00:02,753 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0027,name=TeraSort,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0027/jobhistory/job/job_1405935231196_0027,appMasterHost=N/A,startTime=1405941511929,finishTime=1405941597450,finalStatus=SUCCEEDED
2014-07-23 17:00:02,753 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0024
2014-07-23 17:00:02,753 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0024,name=TeraGen,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0024/jobhistory/job/job_1405935231196_0024,appMasterHost=N/A,startTime=1405941287654,finishTime=1405941332313,finalStatus=SUCCEEDED
2014-07-23 17:00:02,754 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0025
2014-07-23 17:00:02,754 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0025,name=TeraSort,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0025/jobhistory/job/job_1405935231196_0025,appMasterHost=N/A,startTime=1405941400402,finishTime=1405941471876,finalStatus=SUCCEEDED
2014-07-23 17:00:02,760 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0030 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,760 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0030_000001 with final state: FINISHED
2014-07-23 17:00:02,761 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0030_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,761 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0030 State change from NEW to FINISHED
2014-07-23 17:00:02,761 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0030
2014-07-23 17:00:02,761 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0031
2014-07-23 17:00:02,761 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0030,name=TeraSort,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0030/jobhistory/job/job_1405935231196_0030,appMasterHost=N/A,startTime=1405941865666,finishTime=1405941921082,finalStatus=SUCCEEDED
2014-07-23 17:00:02,761 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0031 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,761 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0031_000001 with final state: FINISHED
2014-07-23 17:00:02,762 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0031_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,762 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0031 State change from NEW to FINISHED
2014-07-23 17:00:02,762 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0028
2014-07-23 17:00:02,762 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0031
2014-07-23 17:00:02,762 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0031,name=TeraSort,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0031/jobhistory/job/job_1405935231196_0031,appMasterHost=N/A,startTime=1405941939843,finishTime=1405941998119,finalStatus=SUCCEEDED
2014-07-23 17:00:02,762 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0028 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,762 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0028_000001 with final state: FINISHED
2014-07-23 17:00:02,763 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0028_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,763 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0028 State change from NEW to FINISHED
2014-07-23 17:00:02,763 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0029
2014-07-23 17:00:02,763 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0028
2014-07-23 17:00:02,763 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0028,name=TeraGen,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0028/jobhistory/job/job_1405935231196_0028,appMasterHost=N/A,startTime=1405941677451,finishTime=1405941690080,finalStatus=SUCCEEDED
2014-07-23 17:00:02,763 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0029 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,763 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0029_000001 with final state: FINISHED
2014-07-23 17:00:02,764 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0029_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,764 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0029 State change from NEW to FINISHED
2014-07-23 17:00:02,764 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0034
2014-07-23 17:00:02,764 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0029
2014-07-23 17:00:02,764 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0029,name=TeraSort,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0029/jobhistory/job/job_1405935231196_0029,appMasterHost=N/A,startTime=1405941729264,finishTime=1405941789279,finalStatus=SUCCEEDED
2014-07-23 17:00:02,764 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0034 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,764 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0034_000001 with final state: FINISHED
2014-07-23 17:00:02,765 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0034_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,765 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0034 State change from NEW to FINISHED
2014-07-23 17:00:02,765 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406019383332_0001
2014-07-23 17:00:02,765 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0034
2014-07-23 17:00:02,765 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0034,name=TeraSort,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0034/jobhistory/job/job_1405935231196_0034,appMasterHost=N/A,startTime=1405942176174,finishTime=1405942275416,finalStatus=SUCCEEDED
2014-07-23 17:00:02,765 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406019383332_0001 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,765 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406019383332_0001_000001 with final state: FAILED
2014-07-23 17:00:02,766 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406019383332_0001_000002 with final state: FINISHED
2014-07-23 17:00:02,766 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406019383332_0001_000001 State change from NEW to FAILED
2014-07-23 17:00:02,766 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406019383332_0001_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,766 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406019383332_0001 State change from NEW to FINISHED
2014-07-23 17:00:02,766 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406019383332_0001
2014-07-23 17:00:02,767 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0035
2014-07-23 17:00:02,767 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406019383332_0001,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406019383332_0001/jobhistory/job/job_1406019383332_0001,appMasterHost=N/A,startTime=1406019548737,finishTime=1406019674447,finalStatus=SUCCEEDED
2014-07-23 17:00:02,767 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0035 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,767 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0035_000001 with final state: FINISHED
2014-07-23 17:00:02,767 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0035_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,768 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0035 State change from NEW to FINISHED
2014-07-23 17:00:02,768 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0035
2014-07-23 17:00:02,768 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0035,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0035/jobhistory/job/job_1405935231196_0035,appMasterHost=N/A,startTime=1405942484423,finishTime=1405942515642,finalStatus=SUCCEEDED
2014-07-23 17:00:02,768 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0032
2014-07-23 17:00:02,776 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0032 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,776 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0032_000001 with final state: FINISHED
2014-07-23 17:00:02,777 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0032_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,777 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0032 State change from NEW to FINISHED
2014-07-23 17:00:02,777 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0032
2014-07-23 17:00:02,777 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0033
2014-07-23 17:00:02,777 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0032,name=TeraSort,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0032/jobhistory/job/job_1405935231196_0032,appMasterHost=N/A,startTime=1405941939935,finishTime=1405942040247,finalStatus=SUCCEEDED
2014-07-23 17:00:02,777 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0033 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,777 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0033_000001 with final state: FINISHED
2014-07-23 17:00:02,778 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0033_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,778 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0033 State change from NEW to FINISHED
2014-07-23 17:00:02,778 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0038
2014-07-23 17:00:02,778 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0033
2014-07-23 17:00:02,778 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0033,name=TeraSort,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0033/jobhistory/job/job_1405935231196_0033,appMasterHost=N/A,startTime=1405942176110,finishTime=1405942238460,finalStatus=SUCCEEDED
2014-07-23 17:00:02,778 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0038 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,778 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0038_000001 with final state: FAILED
2014-07-23 17:00:02,779 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0038_000002 with final state: FINISHED
2014-07-23 17:00:02,779 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0038_000001 State change from NEW to FAILED
2014-07-23 17:00:02,779 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0038_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,779 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0038 State change from NEW to FINISHED
2014-07-23 17:00:02,779 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0036
2014-07-23 17:00:02,779 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0038
2014-07-23 17:00:02,780 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0038,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0038/jobhistory/job/job_1405935231196_0038,appMasterHost=N/A,startTime=1405950806649,finishTime=1405950871729,finalStatus=SUCCEEDED
2014-07-23 17:00:02,780 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0036 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,780 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0036_000001 with final state: FINISHED
2014-07-23 17:00:02,780 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0036_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,780 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0036 State change from NEW to FINISHED
2014-07-23 17:00:02,780 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0037
2014-07-23 17:00:02,780 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0036
2014-07-23 17:00:02,781 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0036,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0036/jobhistory/job/job_1405935231196_0036,appMasterHost=N/A,startTime=1405942484475,finishTime=1405942522126,finalStatus=SUCCEEDED
2014-07-23 17:00:02,781 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0037 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,781 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0037_000001 with final state: FAILED
2014-07-23 17:00:02,781 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0037_000002 with final state: FINISHED
2014-07-23 17:00:02,781 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0037_000001 State change from NEW to FAILED
2014-07-23 17:00:02,782 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0037_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,782 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0037 State change from NEW to FINISHED
2014-07-23 17:00:02,782 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406091608463_0004
2014-07-23 17:00:02,782 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0037
2014-07-23 17:00:02,782 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0037,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0037/jobhistory/job/job_1405935231196_0037,appMasterHost=N/A,startTime=1405950806575,finishTime=1405950879028,finalStatus=SUCCEEDED
2014-07-23 17:00:02,782 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406091608463_0004 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,782 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406091608463_0004_000001 with final state: KILLED
2014-07-23 17:00:02,783 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406091608463_0004_000001 State change from NEW to KILLED
2014-07-23 17:00:02,783 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406091608463_0004 State change from NEW to KILLED
2014-07-23 17:00:02,783 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406091608463_0004
2014-07-23 17:00:02,783 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406091608463_0004,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406091608463_0004/,appMasterHost=N/A,startTime=1406091996374,finishTime=1406091997335,finalStatus=KILLED
2014-07-23 17:00:02,783 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406019596496_0001
2014-07-23 17:00:02,784 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406019596496_0001 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,784 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406019596496_0001_000001 with final state: FINISHED
2014-07-23 17:00:02,784 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406019596496_0001_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,784 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406019596496_0001 State change from NEW to FINISHED
2014-07-23 17:00:02,784 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406019596496_0001
2014-07-23 17:00:02,785 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406019596496_0001,name=Sleep job,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406019596496_0001/jobhistory/job/job_1406019596496_0001,appMasterHost=N/A,startTime=1406023749687,finishTime=1406024856485,finalStatus=SUCCEEDED
2014-07-23 17:00:02,784 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406091608463_0003
2014-07-23 17:00:02,796 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406091608463_0003 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,796 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406091608463_0003_000001 with final state: KILLED
2014-07-23 17:00:02,796 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406091608463_0003_000001 State change from NEW to KILLED
2014-07-23 17:00:02,796 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406091608463_0003 State change from NEW to KILLED
2014-07-23 17:00:02,796 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to 
application:application_1406019596496_0002 2014-07-23 17:00:02,796 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406091608463_0003 2014-07-23 17:00:02,797 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406091608463_0003,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406091608463_0003/,appMasterHost=N/A,startTime=1406091899444,finishTime=1406091900332,finalStatus=KILLED 2014-07-23 17:00:02,797 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406019596496_0002 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,797 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406019596496_0002_000001 with final state: FINISHED 2014-07-23 17:00:02,797 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406019596496_0002_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,797 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406019596496_0002 State change from NEW to FINISHED 2014-07-23 17:00:02,798 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406091608463_0005 2014-07-23 17:00:02,798 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406019596496_0002 2014-07-23 17:00:02,798 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406019596496_0002,name=Sleep 
job,user=testos,queue=c,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406019596496_0002/jobhistory/job/job_1406019596496_0002,appMasterHost=N/A,startTime=1406023749780,finishTime=1406026545921,finalStatus=SUCCEEDED 2014-07-23 17:00:02,798 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406091608463_0005 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,798 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406091608463_0005_000001 with final state: KILLED 2014-07-23 17:00:02,798 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406091608463_0005_000001 State change from NEW to KILLED 2014-07-23 17:00:02,798 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406091608463_0005 State change from NEW to KILLED 2014-07-23 17:00:02,798 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406019596496_0003 2014-07-23 17:00:02,798 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406091608463_0005 2014-07-23 17:00:02,799 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406019596496_0003 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,799 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406091608463_0005,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406091608463_0005/,appMasterHost=N/A,startTime=1406095949213,finishTime=1406096009246,finalStatus=KILLED 2014-07-23 17:00:02,799 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: 
appattempt_1406019596496_0003_000001 with final state: FINISHED 2014-07-23 17:00:02,799 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406019596496_0003_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,799 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406019596496_0003 State change from NEW to FINISHED 2014-07-23 17:00:02,799 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406091608463_0002 2014-07-23 17:00:02,799 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406019596496_0003 2014-07-23 17:00:02,799 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406019596496_0003,name=Sleep job,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406019596496_0003/jobhistory/job/job_1406019596496_0003,appMasterHost=N/A,startTime=1406025006528,finishTime=1406026432517,finalStatus=SUCCEEDED 2014-07-23 17:00:02,799 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406091608463_0002 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,800 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406091608463_0002_000001 with final state: KILLED 2014-07-23 17:00:02,800 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406091608463_0002_000001 State change from NEW to KILLED 2014-07-23 17:00:02,800 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406091608463_0002 State change from NEW to KILLED 2014-07-23 17:00:02,800 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos 
OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406091608463_0002 2014-07-23 17:00:02,800 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406091608463_0001 2014-07-23 17:00:02,800 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406091608463_0002,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406091608463_0002/,appMasterHost=N/A,startTime=1406091793368,finishTime=1406091794280,finalStatus=KILLED 2014-07-23 17:00:02,800 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406091608463_0001 with 2 attempts and final state = FAILED 2014-07-23 17:00:02,800 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406091608463_0001_000001 with final state: FAILED 2014-07-23 17:00:02,801 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406091608463_0001_000002 with final state: FAILED 2014-07-23 17:00:02,801 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406091608463_0001_000001 State change from NEW to FAILED 2014-07-23 17:00:02,801 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406091608463_0001_000002 State change from NEW to FAILED 2014-07-23 17:00:02,801 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406091608463_0001 State change from NEW to FAILED 2014-07-23 17:00:02,801 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406096149218_0002 2014-07-23 17:00:02,801 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - 
Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1406091608463_0001 failed 2 times due to Attempt recovered after RM restartAM Container for appattempt_1406091608463_0001_000002 exited with exitCode: 143 due to: Container Killed by ResourceManager Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 .Failing this attempt.. Failing the application. APPID=application_1406091608463_0001 2014-07-23 17:00:02,801 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406096149218_0002 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,801 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406091608463_0001,name=Sleep job,user=testos,queue=a,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406091608463_0001/,appMasterHost=N/A,startTime=1406091766152,finishTime=1406096004640,finalStatus=FAILED 2014-07-23 17:00:02,802 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406096149218_0002_000001 with final state: KILLED 2014-07-23 17:00:02,802 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406096149218_0002_000001 State change from NEW to KILLED 2014-07-23 17:00:02,802 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406096149218_0002 State change from NEW to KILLED 2014-07-23 17:00:02,802 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406096149218_0002 2014-07-23 17:00:02,802 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406096149218_0001 2014-07-23 17:00:02,802 INFO 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406096149218_0002,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406096149218_0002/,appMasterHost=N/A,startTime=1406096926921,finishTime=1406096937081,finalStatus=KILLED 2014-07-23 17:00:02,802 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406096149218_0001 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,803 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406096149218_0001_000001 with final state: KILLED 2014-07-23 17:00:02,803 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406096149218_0001_000001 State change from NEW to KILLED 2014-07-23 17:00:02,803 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406096149218_0001 State change from NEW to KILLED 2014-07-23 17:00:02,803 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0004 2014-07-23 17:00:02,803 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406096149218_0001 2014-07-23 17:00:02,803 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406096149218_0001,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406096149218_0001/,appMasterHost=N/A,startTime=1406096599352,finishTime=1406096704947,finalStatus=KILLED 2014-07-23 17:00:02,803 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0004 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,803 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0004_000001 with final state: FINISHED 2014-07-23 17:00:02,803 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0004_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,804 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0004 State change from NEW to FINISHED 2014-07-23 17:00:02,804 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406090411375_0003 2014-07-23 17:00:02,804 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0004 2014-07-23 17:00:02,804 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0004,name=word count,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0004/jobhistory/job/job_1406007559025_0004,appMasterHost=N/A,startTime=1406011115923,finishTime=1406011143603,finalStatus=SUCCEEDED 2014-07-23 17:00:02,804 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406090411375_0003 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,804 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0003_000001 with final state: KILLED 2014-07-23 17:00:02,804 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0003_000001 State change from NEW to KILLED 2014-07-23 17:00:02,804 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406090411375_0003 State change from NEW to KILLED 2014-07-23 17:00:02,804 
INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0003 2014-07-23 17:00:02,804 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406090411375_0003 2014-07-23 17:00:02,805 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406090411375_0003,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406090411375_0003/,appMasterHost=N/A,startTime=1406090935279,finishTime=1406091073001,finalStatus=KILLED 2014-07-23 17:00:02,805 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0003 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,805 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0003_000001 with final state: FINISHED 2014-07-23 17:00:02,805 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0003_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,805 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0003 State change from NEW to FINISHED 2014-07-23 17:00:02,805 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406090411375_0004 2014-07-23 17:00:02,805 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0003 2014-07-23 17:00:02,805 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0003,name=word 
count,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0003/jobhistory/job/job_1406007559025_0003,appMasterHost=N/A,startTime=1406010949107,finishTime=1406010975382,finalStatus=SUCCEEDED 2014-07-23 17:00:02,805 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406090411375_0004 with 2 attempts and final state = KILLED 2014-07-23 17:00:02,806 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0004_000001 with final state: FAILED 2014-07-23 17:00:02,806 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0004_000002 with final state: KILLED 2014-07-23 17:00:02,806 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0004_000001 State change from NEW to FAILED 2014-07-23 17:00:02,806 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0004_000002 State change from NEW to KILLED 2014-07-23 17:00:02,806 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406090411375_0004 State change from NEW to KILLED 2014-07-23 17:00:02,806 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0002 2014-07-23 17:00:02,806 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406090411375_0004 2014-07-23 17:00:02,806 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406090411375_0004,name=Sleep 
job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406090411375_0004/,appMasterHost=N/A,startTime=1406091110999,finishTime=1406091233015,finalStatus=KILLED 2014-07-23 17:00:02,806 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0002 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,807 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0002_000001 with final state: FINISHED 2014-07-23 17:00:02,807 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0002_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,807 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0002 State change from NEW to FINISHED 2014-07-23 17:00:02,807 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406090411375_0005 2014-07-23 17:00:02,807 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0002 2014-07-23 17:00:02,807 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0002,name=word count,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0002/jobhistory/job/job_1406007559025_0002,appMasterHost=N/A,startTime=1406010887743,finishTime=1406010946859,finalStatus=SUCCEEDED 2014-07-23 17:00:02,807 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406090411375_0005 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,807 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: 
appattempt_1406090411375_0005_000001 with final state: KILLED 2014-07-23 17:00:02,807 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0005_000001 State change from NEW to KILLED 2014-07-23 17:00:02,808 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406090411375_0005 State change from NEW to KILLED 2014-07-23 17:00:02,808 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0001 2014-07-23 17:00:02,808 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406090411375_0005 2014-07-23 17:00:02,808 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406090411375_0005,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406090411375_0005/,appMasterHost=N/A,startTime=1406091278230,finishTime=1406091293019,finalStatus=KILLED 2014-07-23 17:00:02,808 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0001 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,808 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0001_000001 with final state: FINISHED 2014-07-23 17:00:02,808 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0001_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,808 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0001 State change from NEW to FINISHED 2014-07-23 17:00:02,808 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406090411375_0006 
2014-07-23 17:00:02,808 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0001
2014-07-23 17:00:02,809 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0001,name=word count,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0001/jobhistory/job/job_1406007559025_0001,appMasterHost=N/A,startTime=1406010658601,finishTime=1406010734299,finalStatus=SUCCEEDED
2014-07-23 17:00:02,809 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406090411375_0006 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,809 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0006_000001 with final state: KILLED
2014-07-23 17:00:02,809 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0006_000001 State change from NEW to KILLED
2014-07-23 17:00:02,809 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406090411375_0006 State change from NEW to KILLED
2014-07-23 17:00:02,809 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0008
2014-07-23 17:00:02,809 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406090411375_0006
2014-07-23 17:00:02,809 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406090411375_0006,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406090411375_0006/,appMasterHost=N/A,startTime=1406091306983,finishTime=1406091313018,finalStatus=KILLED
2014-07-23 17:00:02,809 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0008 with 0 attempts and final state = FAILED
2014-07-23 17:00:02,810 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0008 State change from NEW to FAILED
2014-07-23 17:00:02,810 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0007
2014-07-23 17:00:02,810 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1406007559025_0008 submitted by user testos to non-leaf queue: b APPID=application_1406007559025_0008
2014-07-23 17:00:02,810 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0008,name=word count,user=testos,queue=b,state=FAILED,trackingUrl=N/A,appMasterHost=N/A,startTime=1406011841896,finishTime=1406011841995,finalStatus=FAILED
2014-07-23 17:00:02,810 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0007 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,810 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0007_000001 with final state: FINISHED
2014-07-23 17:00:02,810 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0007_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,810 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0007 State change from NEW to FINISHED
2014-07-23 17:00:02,810 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0006
2014-07-23 17:00:02,810 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0007
2014-07-23 17:00:02,811 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0007,name=word count,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0007/jobhistory/job/job_1406007559025_0007,appMasterHost=N/A,startTime=1406011539654,finishTime=1406011562773,finalStatus=SUCCEEDED
2014-07-23 17:00:02,811 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0006 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,811 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0006_000001 with final state: FINISHED
2014-07-23 17:00:02,811 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0006_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,812 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0006 State change from NEW to FINISHED
2014-07-23 17:00:02,812 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406090411375_0001
2014-07-23 17:00:02,812 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406090411375_0001 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,812 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0006
2014-07-23 17:00:02,812 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0001_000001 with final state: KILLED
2014-07-23 17:00:02,812 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0006,name=word count,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0006/jobhistory/job/job_1406007559025_0006,appMasterHost=N/A,startTime=1406011402951,finishTime=1406011510820,finalStatus=SUCCEEDED
2014-07-23 17:00:02,813 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0001_000001 State change from NEW to KILLED
2014-07-23 17:00:02,813 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406090411375_0001 State change from NEW to KILLED
2014-07-23 17:00:02,813 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0005
2014-07-23 17:00:02,813 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406090411375_0001
2014-07-23 17:00:02,813 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406090411375_0001,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406090411375_0001/,appMasterHost=N/A,startTime=1406090550287,finishTime=1406090593004,finalStatus=KILLED
2014-07-23 17:00:02,813 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0005 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,813 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0005_000001 with final state: FINISHED
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0005_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0005 State change from NEW to FINISHED
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406090411375_0002
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0005
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0005,name=word count,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0005/jobhistory/job/job_1406007559025_0005,appMasterHost=N/A,startTime=1406011188230,finishTime=1406011284851,finalStatus=SUCCEEDED
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406090411375_0002 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0002_000001 with final state: KILLED
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0002_000001 State change from NEW to KILLED
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406090411375_0002 State change from NEW to KILLED
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0012
2014-07-23 17:00:02,814 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406090411375_0002
2014-07-23 17:00:02,815 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406090411375_0002,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406090411375_0002/,appMasterHost=N/A,startTime=1406090550337,finishTime=1406090573006,finalStatus=KILLED
2014-07-23 17:00:02,815 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0012 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,815 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0012_000001 with final state: FAILED
2014-07-23 17:00:02,815 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0012_000002 with final state: FINISHED
2014-07-23 17:00:02,815 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0012_000001 State change from NEW to FAILED
2014-07-23 17:00:02,815 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0012_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,815 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0012 State change from NEW to FINISHED
2014-07-23 17:00:02,815 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0012
2014-07-23 17:00:02,815 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0011
2014-07-23 17:00:02,816 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0012,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0012/jobhistory/job/job_1406007559025_0012,appMasterHost=N/A,startTime=1406019376726,finishTime=1406019438505,finalStatus=SUCCEEDED
2014-07-23 17:00:02,816 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0011 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,816 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0011_000001 with final state: FINISHED
2014-07-23 17:00:02,816 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0011_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,816 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0011 State change from NEW to FINISHED
2014-07-23 17:00:02,816 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0010
2014-07-23 17:00:02,816 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0010 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,816 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0010_000001 with final state: FINISHED
2014-07-23 17:00:02,817 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0010_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,817 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0010 State change from NEW to FINISHED
2014-07-23 17:00:02,817 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406007559025_0009
2014-07-23 17:00:02,817 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406007559025_0009 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,817 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406007559025_0009_000001 with final state: FINISHED
2014-07-23 17:00:02,817 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406007559025_0009_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,817 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406007559025_0009 State change from NEW to FINISHED
2014-07-23 17:00:02,818 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405920868889_0008
2014-07-23 17:00:02,818 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405920868889_0008 with 2 attempts and final state = FAILED
2014-07-23 17:00:02,818 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405920868889_0008_000001 with final state: FAILED
2014-07-23 17:00:02,818 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405920868889_0008_000002 with final state: FAILED
2014-07-23 17:00:02,819 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405920868889_0008_000001 State change from NEW to FAILED
2014-07-23 17:00:02,819 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405920868889_0008_000002 State change from NEW to FAILED
2014-07-23 17:00:02,819 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405920868889_0008 State change from NEW to FAILED
2014-07-23 17:00:02,819 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405920868889_0007
2014-07-23 17:00:02,819 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405920868889_0007 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,819 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405920868889_0007_000001 with final state: FINISHED
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405920868889_0007_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0011
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405920868889_0007 State change from NEW to FINISHED
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405920868889_0006
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0011,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0011/jobhistory/job/job_1406007559025_0011,appMasterHost=N/A,startTime=1406012108141,finishTime=1406012132731,finalStatus=SUCCEEDED
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0010
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405920868889_0006 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0010,name=word count,user=testos,queue=c,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0010/jobhistory/job/job_1406007559025_0010,appMasterHost=N/A,startTime=1406012108085,finishTime=1406012132973,finalStatus=SUCCEEDED
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406007559025_0009
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405920868889_0006_000001 with final state: FINISHED
2014-07-23 17:00:02,820 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406007559025_0009,name=word count,user=testos,queue=c,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406007559025_0009/jobhistory/job/job_1406007559025_0009,appMasterHost=N/A,startTime=1406012045482,finishTime=1406012067833,finalStatus=SUCCEEDED
2014-07-23 17:00:02,821 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1405920868889_0008 failed 2 times due to Attempt recovered after RM restartAM Container for appattempt_1405920868889_0008_000002 exited with exitCode: 143 due to: Container Killed by ResourceManager Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 .Failing this attempt.. Failing the application. APPID=application_1405920868889_0008
2014-07-23 17:00:02,821 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405920868889_0006_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,821 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405920868889_0006 State change from NEW to FINISHED
2014-07-23 17:00:02,821 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405920868889_0008,name=Sleep job,user=testos,queue=default,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405920868889_0008/,appMasterHost=N/A,startTime=1405932082392,finishTime=1405932583778,finalStatus=FAILED
2014-07-23 17:00:02,821 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405920868889_0005
2014-07-23 17:00:02,821 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405920868889_0007
2014-07-23 17:00:02,821 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405920868889_0007,name=Sleep job,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405920868889_0007/jobhistory/job/job_1405920868889_0007,appMasterHost=N/A,startTime=1405925571745,finishTime=1405926282725,finalStatus=SUCCEEDED
2014-07-23 17:00:02,821 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405920868889_0005 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,821 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405920868889_0006
2014-07-23 17:00:02,821 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405920868889_0005_000001 with final state: FINISHED
2014-07-23 17:00:02,821 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405920868889_0006,name=Sleep job,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405920868889_0006/jobhistory/job/job_1405920868889_0006,appMasterHost=N/A,startTime=1405925182288,finishTime=1405925200950,finalStatus=SUCCEEDED
2014-07-23 17:00:02,822 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405920868889_0005_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,822 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405920868889_0005 State change from NEW to FINISHED
2014-07-23 17:00:02,822 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405920868889_0004
2014-07-23 17:00:02,822 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405920868889_0005
2014-07-23 17:00:02,822 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405920868889_0005,name=Sleep job,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405920868889_0005/jobhistory/job/job_1405920868889_0005,appMasterHost=N/A,startTime=1405924950547,finishTime=1405925092365,finalStatus=SUCCEEDED
2014-07-23 17:00:02,822 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405920868889_0004 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,822 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405920868889_0004_000001 with final state: FINISHED
2014-07-23 17:00:02,823 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405920868889_0004_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,823 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405920868889_0004 State change from NEW to FINISHED
2014-07-23 17:00:02,823 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405920868889_0003
2014-07-23 17:00:02,823 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405920868889_0004
2014-07-23 17:00:02,823 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405920868889_0004,name=Sleep job,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405920868889_0004/jobhistory/job/job_1405920868889_0004,appMasterHost=N/A,startTime=1405924800261,finishTime=1405924940697,finalStatus=SUCCEEDED
2014-07-23 17:00:02,823 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405920868889_0003 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,823 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405920868889_0003_000001 with final state: FINISHED
2014-07-23 17:00:02,824 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405920868889_0003_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,824 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405920868889_0003 State change from NEW to FINISHED
2014-07-23 17:00:02,824 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405920868889_0002
2014-07-23 17:00:02,824 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405920868889_0003
2014-07-23 17:00:02,824 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405920868889_0003,name=Sleep job,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405920868889_0003/jobhistory/job/job_1405920868889_0003,appMasterHost=N/A,startTime=1405924712730,finishTime=1405924781483,finalStatus=SUCCEEDED
2014-07-23 17:00:02,824 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405920868889_0002 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,824 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405920868889_0002_000001 with final state: FINISHED
2014-07-23 17:00:02,825 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405920868889_0002_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,825 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405920868889_0002 State change from NEW to FINISHED
2014-07-23 17:00:02,825 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405920868889_0002
2014-07-23 17:00:02,825 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405920868889_0001
2014-07-23 17:00:02,825 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405920868889_0002,name=Sleep job,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405920868889_0002/jobhistory/job/job_1405920868889_0002,appMasterHost=N/A,startTime=1405924621582,finishTime=1405924688299,finalStatus=SUCCEEDED
2014-07-23 17:00:02,825 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405920868889_0001 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,825 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405920868889_0001_000001 with final state: FINISHED
2014-07-23 17:00:02,826 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405920868889_0001_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,826 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405920868889_0001 State change from NEW to FINISHED
2014-07-23 17:00:02,826 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406039105488_0002
2014-07-23 17:00:02,826 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405920868889_0001
2014-07-23 17:00:02,826 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405920868889_0001,name=Sleep job,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405920868889_0001/jobhistory/job/job_1405920868889_0001,appMasterHost=N/A,startTime=1405922358221,finishTime=1405922375365,finalStatus=SUCCEEDED
2014-07-23 17:00:02,826 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406039105488_0002 with 2 attempts and final state = KILLED
2014-07-23 17:00:02,826 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406039105488_0002_000001 with final state: FAILED
2014-07-23 17:00:02,827 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406039105488_0002_000002 with final state: KILLED
2014-07-23 17:00:02,827 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406039105488_0002_000001 State change from NEW to FAILED
2014-07-23 17:00:02,827 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406039105488_0002_000002 State change from NEW to KILLED
2014-07-23 17:00:02,827 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406039105488_0002 State change from NEW to KILLED
2014-07-23 17:00:02,827 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406039105488_0002
2014-07-23 17:00:02,827 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406039105488_0001
2014-07-23 17:00:02,827 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406039105488_0002,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406039105488_0002/,appMasterHost=N/A,startTime=1406039154284,finishTime=1406039345422,finalStatus=KILLED
2014-07-23 17:00:02,828 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406039105488_0001 with 0 attempts and final state = FAILED
2014-07-23 17:00:02,828 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406039105488_0001 State change from NEW to FAILED
2014-07-23 17:00:02,828 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0005
2014-07-23 17:00:02,828 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1406039105488_0001 submitted by user testos to non-leaf queue: a APPID=application_1406039105488_0001
2014-07-23 17:00:02,828 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406039105488_0001,name=Sleep job,user=testos,queue=a,state=FAILED,trackingUrl=N/A,appMasterHost=N/A,startTime=1406039154268,finishTime=1406039154338,finalStatus=FAILED
2014-07-23 17:00:02,828 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0005 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,828 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0005_000001 with final state: FINISHED
2014-07-23 17:00:02,829 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0005_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,829 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0005 State change from NEW to FINISHED
2014-07-23 17:00:02,829 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0004
2014-07-23 17:00:02,829 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0005
2014-07-23 17:00:02,829 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0005,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0005/jobhistory/job/job_1405935231196_0005,appMasterHost=N/A,startTime=1405938276611,finishTime=1405938306181,finalStatus=SUCCEEDED
2014-07-23 17:00:02,829 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0004 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,829 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0004_000001 with final state: FINISHED
2014-07-23 17:00:02,830 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0004_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,830 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0004 State change from NEW to FINISHED
2014-07-23 17:00:02,830 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0004
2014-07-23 17:00:02,830 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0007
2014-07-23 17:00:02,830 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0004,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0004/jobhistory/job/job_1405935231196_0004,appMasterHost=N/A,startTime=1405938149595,finishTime=1405938210094,finalStatus=FAILED
2014-07-23 17:00:02,830 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0007 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,830 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0007_000001 with final state: FINISHED
2014-07-23 17:00:02,831 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0007_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,831 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0007 State change from NEW to FINISHED
2014-07-23 17:00:02,831 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0006
2014-07-23 17:00:02,831 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0007
2014-07-23 17:00:02,831 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0007,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0007/jobhistory/job/job_1405935231196_0007,appMasterHost=N/A,startTime=1405938735492,finishTime=1405938764230,finalStatus=SUCCEEDED
2014-07-23 17:00:02,831 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0006 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,831 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0006_000001 with final state: FINISHED
2014-07-23 17:00:02,832 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0006_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,832 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0006 State change from NEW to FINISHED
2014-07-23 17:00:02,832 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405932581275_0001
2014-07-23 17:00:02,832 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0006
2014-07-23 17:00:02,832 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0006,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0006/jobhistory/job/job_1405935231196_0006,appMasterHost=N/A,startTime=1405938331798,finishTime=1405938375297,finalStatus=SUCCEEDED
2014-07-23 17:00:02,832 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405932581275_0001 with 2 attempts and final state = FAILED
2014-07-23 17:00:02,832 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405932581275_0001_000001 with final state: FAILED
2014-07-23 17:00:02,833 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405932581275_0001_000002 with final state: FAILED
2014-07-23 17:00:02,833 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405932581275_0001_000001 State change from NEW to FAILED
2014-07-23 17:00:02,833 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405932581275_0001_000002 State change from NEW to FAILED
2014-07-23 17:00:02,833 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405932581275_0001 State change from NEW to FAILED
2014-07-23 17:00:02,833 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0001
2014-07-23 17:00:02,834 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0001 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,834 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0001_000001 with final state: FINISHED
2014-07-23 17:00:02,834 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0001_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,834 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0001 State change from NEW to FINISHED
2014-07-23 17:00:02,834 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0003
2014-07-23 17:00:02,835 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0003 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,835 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0003_000001 with final state: FINISHED
2014-07-23 17:00:02,835 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0003_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,835 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0003 State change from NEW to FINISHED
2014-07-23 17:00:02,835 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935231196_0002
2014-07-23 17:00:02,836 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935231196_0002 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,836 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935231196_0002_000001 with final state: FINISHED
2014-07-23 17:00:02,836 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935231196_0002_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,836 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935231196_0002 State change from NEW to FINISHED
2014-07-23 17:00:02,836 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406090411375_0009
2014-07-23 17:00:02,837 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406090411375_0009 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,837 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0009_000001 with final state: KILLED
2014-07-23 17:00:02,837 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0009_000001 State change from NEW to KILLED
2014-07-23 17:00:02,837 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406090411375_0009 State change from NEW to KILLED
2014-07-23 17:00:02,837 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406090411375_0008
2014-07-23 17:00:02,838 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406090411375_0008 with 2 attempts and final state = FAILED
2014-07-23 17:00:02,838 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0008_000001 with final state: FAILED
2014-07-23 17:00:02,838 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0008_000002 with final state: FAILED
2014-07-23 17:00:02,838 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0008_000001 State change from NEW to FAILED
2014-07-23 17:00:02,839 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0008_000002 State change from NEW to FAILED
2014-07-23 17:00:02,839 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406090411375_0008 State change from NEW to FAILED
2014-07-23 17:00:02,839 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406090411375_0007
2014-07-23 17:00:02,839 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406090411375_0007 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,839 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0007_000001 with final state: FAILED
2014-07-23 17:00:02,840 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406090411375_0007_000002 with final state: FINISHED
2014-07-23 17:00:02,840 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0007_000001 State change from NEW to FAILED
2014-07-23 17:00:02,840 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406090411375_0007_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,840 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406090411375_0007 State change from NEW to FINISHED
2014-07-23 17:00:02,840 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406031222625_0005
2014-07-23 17:00:02,840 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406031222625_0005 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,841 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406031222625_0005_000001 with final state: KILLED
2014-07-23 17:00:02,841 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406031222625_0005_000001 State change from NEW to KILLED
2014-07-23 17:00:02,841 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406031222625_0005 State change from NEW to KILLED
2014-07-23 17:00:02,841 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406031222625_0006
2014-07-23 17:00:02,841 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406031222625_0006 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,842 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406031222625_0006_000001 with final state: KILLED
2014-07-23 17:00:02,842 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406031222625_0006_000001 State change from NEW to KILLED
2014-07-23 17:00:02,842 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406031222625_0006 State change from NEW to KILLED
2014-07-23 17:00:02,842 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406031222625_0003
2014-07-23 17:00:02,842 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406031222625_0003 with 1 attempts and final state = FINISHED
2014-07-23
17:00:02,842 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406031222625_0003_000001 with final state: FINISHED 2014-07-23 17:00:02,843 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406031222625_0003_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,843 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406031222625_0003 State change from NEW to FINISHED 2014-07-23 17:00:02,843 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406031222625_0004 2014-07-23 17:00:02,843 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406031222625_0004 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,843 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406031222625_0004_000001 with final state: FINISHED 2014-07-23 17:00:02,844 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1405932581275_0001 failed 2 times due to Attempt recovered after RM restartAM Container for appattempt_1405932581275_0001_000002 exited with exitCode: 143 due to: Container Killed by ResourceManager Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 .Failing this attempt.. Failing the application. 
APPID=application_1405932581275_0001 2014-07-23 17:00:02,844 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406031222625_0004_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,844 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406031222625_0004 State change from NEW to FINISHED 2014-07-23 17:00:02,844 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405932581275_0001,name=Sleep job,user=testos,queue=default,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405932581275_0001/,appMasterHost=N/A,startTime=1405932752221,finishTime=1405932928057,finalStatus=FAILED 2014-07-23 17:00:02,844 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406031222625_0001 2014-07-23 17:00:02,844 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0001 2014-07-23 17:00:02,850 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0001,name=Sleep job,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0001/jobhistory/job/job_1405935231196_0001,appMasterHost=N/A,startTime=1405935552457,finishTime=1405935794372,finalStatus=SUCCEEDED 2014-07-23 17:00:02,850 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0003 2014-07-23 17:00:02,850 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0003,name=word 
count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0003/jobhistory/job/job_1405935231196_0003,appMasterHost=N/A,startTime=1405937950605,finishTime=1405938079951,finalStatus=SUCCEEDED 2014-07-23 17:00:02,850 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406031222625_0001 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,850 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935231196_0002 2014-07-23 17:00:02,851 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935231196_0002,name=Sleep job,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935231196_0002/jobhistory/job/job_1405935231196_0002,appMasterHost=N/A,startTime=1405937167757,finishTime=1405937741465,finalStatus=SUCCEEDED 2014-07-23 17:00:02,851 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406031222625_0001_000001 with final state: KILLED 2014-07-23 17:00:02,851 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406090411375_0009 2014-07-23 17:00:02,851 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406090411375_0009,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406090411375_0009/,appMasterHost=N/A,startTime=1406091485679,finishTime=1406091513022,finalStatus=KILLED 2014-07-23 17:00:02,851 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406031222625_0001_000001 State change from NEW to KILLED 2014-07-23 
17:00:02,851 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1406090411375_0008 failed 2 times due to ApplicationMaster for attempt appattempt_1406090411375_0008_000002 timed out. Failing the application. APPID=application_1406090411375_0008 2014-07-23 17:00:02,851 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406031222625_0001 State change from NEW to KILLED 2014-07-23 17:00:02,851 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406031222625_0002 2014-07-23 17:00:02,851 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406090411375_0008,name=Sleep job,user=testos,queue=b,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406090411375_0008/,appMasterHost=N/A,startTime=1406091438122,finishTime=1406092609987,finalStatus=FAILED 2014-07-23 17:00:02,851 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406090411375_0007 2014-07-23 17:00:02,852 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406090411375_0007,name=Sleep job,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406090411375_0007/jobhistory/job/job_1406090411375_0007,appMasterHost=N/A,startTime=1406091327735,finishTime=1406091646644,finalStatus=FAILED 2014-07-23 17:00:02,852 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406031222625_0005 2014-07-23 17:00:02,852 INFO 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406031222625_0005,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406031222625_0005/,appMasterHost=N/A,startTime=1406032036368,finishTime=1406032301090,finalStatus=KILLED 2014-07-23 17:00:02,852 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406031222625_0006 2014-07-23 17:00:02,852 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406031222625_0006,name=Sleep job,user=testos,queue=c,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406031222625_0006/,appMasterHost=N/A,startTime=1406032132271,finishTime=1406032307341,finalStatus=KILLED 2014-07-23 17:00:02,852 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406031222625_0003 2014-07-23 17:00:02,852 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406031222625_0003,name=Sleep job,user=testos,queue=c,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406031222625_0003/jobhistory/job/job_1406031222625_0003,appMasterHost=N/A,startTime=1406031926367,finishTime=1406031951174,finalStatus=FAILED 2014-07-23 17:00:02,853 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406031222625_0004 2014-07-23 17:00:02,853 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406031222625_0004,name=Sleep 
job,user=testos,queue=c,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406031222625_0004/jobhistory/job/job_1406031222625_0004,appMasterHost=N/A,startTime=1406031989817,finishTime=1406032043214,finalStatus=FAILED 2014-07-23 17:00:02,853 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406031222625_0001 2014-07-23 17:00:02,853 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406031222625_0001,name=Sleep job,user=testos,queue=c,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406031222625_0001/,appMasterHost=N/A,startTime=1406031441616,finishTime=1406031793702,finalStatus=KILLED 2014-07-23 17:00:02,854 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406031222625_0002 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,854 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406031222625_0002_000001 with final state: KILLED 2014-07-23 17:00:02,854 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406031222625_0002_000001 State change from NEW to KILLED 2014-07-23 17:00:02,854 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406031222625_0002 State change from NEW to KILLED 2014-07-23 17:00:02,854 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406095945156_0001 2014-07-23 17:00:02,855 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406095945156_0001 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,855 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed 
TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406031222625_0002 2014-07-23 17:00:02,855 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406031222625_0002,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406031222625_0002/,appMasterHost=N/A,startTime=1406031441639,finishTime=1406031789738,finalStatus=KILLED 2014-07-23 17:00:02,855 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406095945156_0001_000001 with final state: KILLED 2014-07-23 17:00:02,856 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406095945156_0001_000001 State change from NEW to KILLED 2014-07-23 17:00:02,856 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406095945156_0001 State change from NEW to KILLED 2014-07-23 17:00:02,856 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406095945156_0001 2014-07-23 17:00:02,856 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406095945156_0001,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406095945156_0001/,appMasterHost=N/A,startTime=1406096042998,finishTime=1406096163906,finalStatus=KILLED 2014-07-23 17:00:02,856 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406039240027_0001 2014-07-23 17:00:02,856 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406039240027_0001 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,857 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: 
appattempt_1406039240027_0001_000001 with final state: KILLED 2014-07-23 17:00:02,857 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406039240027_0001_000001 State change from NEW to KILLED 2014-07-23 17:00:02,857 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406039240027_0001 State change from NEW to KILLED 2014-07-23 17:00:02,857 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406039240027_0001 2014-07-23 17:00:02,857 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406039240027_0001,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406039240027_0001/,appMasterHost=N/A,startTime=1406039299334,finishTime=1406039339265,finalStatus=KILLED 2014-07-23 17:00:02,858 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406039240027_0002 2014-07-23 17:00:02,858 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406039240027_0002 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,858 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406039240027_0002_000001 with final state: FINISHED 2014-07-23 17:00:02,858 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406039240027_0002_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,858 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406039240027_0002 State change from NEW to FINISHED 2014-07-23 17:00:02,859 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager 
RESULT=SUCCESS APPID=application_1406039240027_0002 2014-07-23 17:00:02,859 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406039240027_0002,name=Sleep job,user=testos,queue=a,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406039240027_0002/jobhistory/job/job_1406039240027_0002,appMasterHost=N/A,startTime=1406039358377,finishTime=1406039511154,finalStatus=FAILED 2014-07-23 17:00:02,859 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405934731917_0001 2014-07-23 17:00:02,859 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405934731917_0001 with 2 attempts and final state = FAILED 2014-07-23 17:00:02,859 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405934731917_0001_000001 with final state: FAILED 2014-07-23 17:00:02,860 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405934731917_0001_000002 with final state: FAILED 2014-07-23 17:00:02,860 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405934731917_0001_000001 State change from NEW to FAILED 2014-07-23 17:00:02,860 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405934731917_0001_000002 State change from NEW to FAILED 2014-07-23 17:00:02,860 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405934731917_0001 State change from NEW to FAILED 2014-07-23 17:00:02,860 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1405934731917_0001 failed 2 times due to Attempt recovered after RM 
restartAM Container for appattempt_1405934731917_0001_000002 exited with exitCode: 143 due to: Container Killed by ResourceManager Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 .Failing this attempt.. Failing the application. APPID=application_1405934731917_0001 2014-07-23 17:00:02,860 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405934731917_0001,name=Sleep job,user=testos,queue=default,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405934731917_0001/,appMasterHost=N/A,startTime=1405934805763,finishTime=1405935055211,finalStatus=FAILED 2014-07-23 17:00:02,860 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0019 2014-07-23 17:00:02,862 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0019 with 1 attempts and final state = FINISHED 2014-07-23 17:00:02,862 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0019_000001 with final state: FINISHED 2014-07-23 17:00:02,862 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0019_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,872 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0019 State change from NEW to FINISHED 2014-07-23 17:00:02,873 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0019 2014-07-23 17:00:02,873 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0019,name=Sleep 
job,user=testos,queue=a2,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0019/jobhistory/job/job_1406035038624_0019,appMasterHost=N/A,startTime=1406037919227,finishTime=1406038467702,finalStatus=SUCCEEDED 2014-07-23 17:00:02,873 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406097392962_0004 2014-07-23 17:00:02,873 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406097392962_0004 with 1 attempts and final state = KILLED 2014-07-23 17:00:02,873 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406097392962_0004_000001 with final state: KILLED 2014-07-23 17:00:02,874 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406097392962_0004_000001 State change from NEW to KILLED 2014-07-23 17:00:02,874 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406097392962_0004 State change from NEW to KILLED 2014-07-23 17:00:02,874 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406097392962_0004 2014-07-23 17:00:02,874 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406097392962_0004,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406097392962_0004/,appMasterHost=N/A,startTime=1406098902863,finishTime=1406098913151,finalStatus=KILLED 2014-07-23 17:00:02,874 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0018 2014-07-23 17:00:02,875 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0018 with 1 attempts and 
final state = FINISHED 2014-07-23 17:00:02,875 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0018_000001 with final state: FINISHED 2014-07-23 17:00:02,875 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0018_000001 State change from NEW to FINISHED 2014-07-23 17:00:02,875 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0018 State change from NEW to FINISHED 2014-07-23 17:00:02,875 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0018 2014-07-23 17:00:02,876 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0018,name=Sleep job,user=testos,queue=b,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0018/jobhistory/job/job_1406035038624_0018,appMasterHost=N/A,startTime=1406037920758,finishTime=1406038053830,finalStatus=FAILED 2014-07-23 17:00:02,876 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406003634132_0001 2014-07-23 17:00:02,876 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406003634132_0001 with 2 attempts and final state = FINISHED 2014-07-23 17:00:02,876 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406003634132_0001_000001 with final state: FAILED 2014-07-23 17:00:02,877 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406003634132_0001_000002 with final state: FINISHED 2014-07-23 17:00:02,877 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
appattempt_1406003634132_0001_000001 State change from NEW to FAILED 2014-07-23 17:00:02,877 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406003634132_0001_000002 State change from NEW to FINISHED 2014-07-23 17:00:02,877 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406003634132_0001 State change from NEW to FINISHED 2014-07-23 17:00:02,877 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406003634132_0001 2014-07-23 17:00:02,878 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406003634132_0001,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406003634132_0001/jobhistory/job/job_1406003634132_0001,appMasterHost=N/A,startTime=1406003819292,finishTime=1406003972913,finalStatus=SUCCEEDED 2014-07-23 17:00:02,878 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0017 2014-07-23 17:00:02,878 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0017 with 2 attempts and final state = FINISHED 2014-07-23 17:00:02,878 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0017_000001 with final state: FAILED 2014-07-23 17:00:02,878 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0017_000002 with final state: FINISHED 2014-07-23 17:00:02,879 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0017_000001 State change from NEW to FAILED 2014-07-23 17:00:02,879 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0017_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,879 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0017 State change from NEW to FINISHED
2014-07-23 17:00:02,879 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0017
2014-07-23 17:00:02,879 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0017,name=Sleep job,user=testos,queue=a1,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0017/jobhistory/job/job_1406035038624_0017,appMasterHost=N/A,startTime=1406037919070,finishTime=1406039066110,finalStatus=SUCCEEDED
2014-07-23 17:00:02,879 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406097392962_0002
2014-07-23 17:00:02,880 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406097392962_0002 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,880 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406097392962_0002_000001 with final state: KILLED
2014-07-23 17:00:02,880 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406097392962_0002_000001 State change from NEW to KILLED
2014-07-23 17:00:02,880 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406097392962_0002 State change from NEW to KILLED
2014-07-23 17:00:02,881 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406097392962_0002
2014-07-23 17:00:02,881 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406097392962_0002,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406097392962_0002/,appMasterHost=N/A,startTime=1406097727480,finishTime=1406098329040,finalStatus=KILLED
2014-07-23 17:00:02,881 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406003634132_0002
2014-07-23 17:00:02,881 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406003634132_0002 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,881 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406003634132_0002_000001 with final state: FAILED
2014-07-23 17:00:02,882 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406003634132_0002_000002 with final state: FINISHED
2014-07-23 17:00:02,882 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406003634132_0002_000001 State change from NEW to FAILED
2014-07-23 17:00:02,882 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406003634132_0002_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,882 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406003634132_0002 State change from NEW to FINISHED
2014-07-23 17:00:02,882 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406003634132_0002
2014-07-23 17:00:02,882 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406003634132_0002,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406003634132_0002/jobhistory/job/job_1406003634132_0002,appMasterHost=N/A,startTime=1406003821039,finishTime=1406003975208,finalStatus=SUCCEEDED
2014-07-23 17:00:02,882 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0016
2014-07-23 17:00:02,883 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0016 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,883 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0016_000001 with final state: FINISHED
2014-07-23 17:00:02,883 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0016_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,884 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0016 State change from NEW to FINISHED
2014-07-23 17:00:02,884 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0016
2014-07-23 17:00:02,884 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0016,name=Sleep job,user=testos,queue=a2,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0016/jobhistory/job/job_1406035038624_0016,appMasterHost=N/A,startTime=1406037661555,finishTime=1406037871840,finalStatus=SUCCEEDED
2014-07-23 17:00:02,884 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406097392962_0003
2014-07-23 17:00:02,884 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406097392962_0003 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,884 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406097392962_0003_000001 with final state: KILLED
2014-07-23 17:00:02,885 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406097392962_0003_000001 State change from NEW to KILLED
2014-07-23 17:00:02,891 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406097392962_0003 State change from NEW to KILLED
2014-07-23 17:00:02,891 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0015
2014-07-23 17:00:02,891 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406097392962_0003
2014-07-23 17:00:02,891 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406097392962_0003,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406097392962_0003/,appMasterHost=N/A,startTime=1406097752072,finishTime=1406097758933,finalStatus=KILLED
2014-07-23 17:00:02,891 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0015 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,891 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0015_000001 with final state: KILLED
2014-07-23 17:00:02,892 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0015_000001 State change from NEW to KILLED
2014-07-23 17:00:02,892 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0015 State change from NEW to KILLED
2014-07-23 17:00:02,892 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0014
2014-07-23 17:00:02,892 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0015
2014-07-23 17:00:02,892 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0015,name=Sleep job,user=testos,queue=b,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0015/,appMasterHost=N/A,startTime=1406037661485,finishTime=1406037911747,finalStatus=KILLED
2014-07-23 17:00:02,892 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0014 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,892 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0014_000001 with final state: KILLED
2014-07-23 17:00:02,893 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0014_000001 State change from NEW to KILLED
2014-07-23 17:00:02,893 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0014 State change from NEW to KILLED
2014-07-23 17:00:02,893 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0014
2014-07-23 17:00:02,893 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406097392962_0001
2014-07-23 17:00:02,893 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0014,name=Sleep job,user=testos,queue=a1,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0014/,appMasterHost=N/A,startTime=1406037661211,finishTime=1406037717391,finalStatus=KILLED
2014-07-23 17:00:02,893 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406097392962_0001 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,893 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406097392962_0001_000001 with final state: KILLED
2014-07-23 17:00:02,894 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406097392962_0001_000001 State change from NEW to KILLED
2014-07-23 17:00:02,894 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406097392962_0001 State change from NEW to KILLED
2014-07-23 17:00:02,894 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0013
2014-07-23 17:00:02,894 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406097392962_0001
2014-07-23 17:00:02,894 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406097392962_0001,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406097392962_0001/,appMasterHost=N/A,startTime=1406097727471,finishTime=1406097732942,finalStatus=KILLED
2014-07-23 17:00:02,894 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0013 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,894 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0013_000001 with final state: KILLED
2014-07-23 17:00:02,894 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0013_000001 State change from NEW to KILLED
2014-07-23 17:00:02,895 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0013 State change from NEW to KILLED
2014-07-23 17:00:02,895 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406035038624_0012
2014-07-23 17:00:02,895 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0013
2014-07-23 17:00:02,895 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0013,name=Sleep job,user=testos,queue=a2,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0013/,appMasterHost=N/A,startTime=1406037473301,finishTime=1406037539906,finalStatus=KILLED
2014-07-23 17:00:02,895 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406035038624_0012 with 1 attempts and final state = KILLED
2014-07-23 17:00:02,895 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406035038624_0012_000001 with final state: KILLED
2014-07-23 17:00:02,895 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406035038624_0012_000001 State change from NEW to KILLED
2014-07-23 17:00:02,895 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406035038624_0012 State change from NEW to KILLED
2014-07-23 17:00:02,895 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406035038624_0012
2014-07-23 17:00:02,895 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405950818845_0006
2014-07-23 17:00:02,896 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406035038624_0012,name=Sleep job,user=testos,queue=a1,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406035038624_0012/,appMasterHost=N/A,startTime=1406037473111,finishTime=1406037641281,finalStatus=KILLED
2014-07-23 17:00:02,896 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405950818845_0006 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,896 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405950818845_0006_000001 with final state: FAILED
2014-07-23 17:00:02,897 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405950818845_0006_000002 with final state: FINISHED
2014-07-23 17:00:02,897 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405950818845_0006_000001 State change from NEW to FAILED
2014-07-23 17:00:02,897 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405950818845_0006_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,897 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405950818845_0006 State change from NEW to FINISHED
2014-07-23 17:00:02,897 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405950818845_0005
2014-07-23 17:00:02,897 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405950818845_0006
2014-07-23 17:00:02,897 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405950818845_0006,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405950818845_0006/jobhistory/job/job_1405950818845_0006,appMasterHost=N/A,startTime=1406002926251,finishTime=1406003072425,finalStatus=SUCCEEDED
2014-07-23 17:00:02,897 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405950818845_0005 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,898 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405950818845_0005_000001 with final state: FAILED
2014-07-23 17:00:02,898 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405950818845_0005_000002 with final state: FINISHED
2014-07-23 17:00:02,898 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405950818845_0005_000001 State change from NEW to FAILED
2014-07-23 17:00:02,898 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405950818845_0005_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,899 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405950818845_0005 State change from NEW to FINISHED
2014-07-23 17:00:02,899 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405950818845_0005
2014-07-23 17:00:02,899 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405950818845_0005,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405950818845_0005/jobhistory/job/job_1405950818845_0005,appMasterHost=N/A,startTime=1406002926080,finishTime=1406003040748,finalStatus=SUCCEEDED
2014-07-23 17:00:02,899 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406030177639_0001
2014-07-23 17:00:02,899 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406030177639_0001 with 2 attempts and final state = FAILED
2014-07-23 17:00:02,899 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406030177639_0001_000001 with final state: FAILED
2014-07-23 17:00:02,900 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406030177639_0001_000002 with final state: FAILED
2014-07-23 17:00:02,900 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406030177639_0001_000001 State change from NEW to FAILED
2014-07-23 17:00:02,900 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406030177639_0001_000002 State change from NEW to FAILED
2014-07-23 17:00:02,900 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406030177639_0001 State change from NEW to FAILED
2014-07-23 17:00:02,900 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1406030177639_0001 failed 2 times due to Attempt recovered after RM restartAM Container for appattempt_1406030177639_0001_000002 exited with exitCode: 1 due to: Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
	at org.apache.hadoop.util.Shell.run(Shell.java:418)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:208)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application. APPID=application_1406030177639_0001
2014-07-23 17:00:02,900 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406030177639_0001,name=Sleep job,user=testos,queue=c,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406030177639_0001/,appMasterHost=N/A,startTime=1406031016886,finishTime=1406031279228,finalStatus=FAILED
2014-07-23 17:00:02,901 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406030177639_0002
2014-07-23 17:00:02,901 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406030177639_0002 with 2 attempts and final state = FAILED
2014-07-23 17:00:02,901 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406030177639_0002_000001 with final state: FAILED
2014-07-23 17:00:02,902 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406030177639_0002_000002 with final state: FAILED
2014-07-23 17:00:02,902 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406030177639_0002_000001 State change from NEW to FAILED
2014-07-23 17:00:02,902 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406030177639_0002_000002 State change from NEW to FAILED
2014-07-23 17:00:02,902 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406030177639_0002 State change from NEW to FAILED
2014-07-23 17:00:02,902 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405950818845_0004
2014-07-23 17:00:02,902 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1406030177639_0002 failed 2 times due to Attempt recovered after RM restartAM Container for appattempt_1406030177639_0002_000002 exited with exitCode: 1 due to: Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
	at org.apache.hadoop.util.Shell.run(Shell.java:418)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:208)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application. APPID=application_1406030177639_0002
2014-07-23 17:00:02,902 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406030177639_0002,name=Sleep job,user=testos,queue=a,state=FAILED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406030177639_0002/,appMasterHost=N/A,startTime=1406031061282,finishTime=1406031279275,finalStatus=FAILED
2014-07-23 17:00:02,902 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405950818845_0004 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,903 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405950818845_0004_000001 with final state: FINISHED
2014-07-23 17:00:02,903 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405950818845_0004_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,903 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405950818845_0004 State change from NEW to FINISHED
2014-07-23 17:00:02,903 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405950818845_0004
2014-07-23 17:00:02,903 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405950818845_0004,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405950818845_0004/jobhistory/job/job_1405950818845_0004,appMasterHost=N/A,startTime=1405951378177,finishTime=1405951413282,finalStatus=SUCCEEDED
2014-07-23 17:00:02,903 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405950818845_0003
2014-07-23 17:00:02,903 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405950818845_0003 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,904 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405950818845_0003_000001 with final state: FINISHED
2014-07-23 17:00:02,904 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405950818845_0003_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,904 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405950818845_0003 State change from NEW to FINISHED
2014-07-23 17:00:02,904 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405950818845_0003
2014-07-23 17:00:02,904 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405950818845_0003,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405950818845_0003/jobhistory/job/job_1405950818845_0003,appMasterHost=N/A,startTime=1405951375516,finishTime=1405951395668,finalStatus=FAILED
2014-07-23 17:00:02,904 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405950818845_0002
2014-07-23 17:00:02,905 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405950818845_0002 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,905 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405950818845_0002_000001 with final state: FINISHED
2014-07-23 17:00:02,905 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405950818845_0002_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,905 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405950818845_0002 State change from NEW to FINISHED
2014-07-23 17:00:02,905 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405950818845_0001
2014-07-23 17:00:02,905 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405950818845_0002
2014-07-23 17:00:02,905 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405950818845_0002,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405950818845_0002/jobhistory/job/job_1405950818845_0002,appMasterHost=N/A,startTime=1405950990991,finishTime=1405951024789,finalStatus=SUCCEEDED
2014-07-23 17:00:02,906 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405950818845_0001 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,906 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405950818845_0001_000001 with final state: FINISHED
2014-07-23 17:00:02,906 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405950818845_0001_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,906 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405950818845_0001 State change from NEW to FINISHED
2014-07-23 17:00:02,906 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405950818845_0001
2014-07-23 17:00:02,906 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406039240027_0004
2014-07-23 17:00:02,906 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405950818845_0001,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405950818845_0001/jobhistory/job/job_1405950818845_0001,appMasterHost=N/A,startTime=1405950990977,finishTime=1405951025984,finalStatus=SUCCEEDED
2014-07-23 17:00:02,907 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406039240027_0004 with 2 attempts and final state = KILLED
2014-07-23 17:00:02,907 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406039240027_0004_000001 with final state: FAILED
2014-07-23 17:00:02,907 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406039240027_0004_000002 with final state: KILLED
2014-07-23 17:00:02,907 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406039240027_0004_000001 State change from NEW to FAILED
2014-07-23 17:00:02,907 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406039240027_0004_000002 State change from NEW to KILLED
2014-07-23 17:00:02,907 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406039240027_0004 State change from NEW to KILLED
2014-07-23 17:00:02,907 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406039240027_0003
2014-07-23 17:00:02,907 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406039240027_0004
2014-07-23 17:00:02,907 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406039240027_0004,name=Sleep job,user=testos,queue=a,state=KILLED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406039240027_0004/,appMasterHost=N/A,startTime=1406039712145,finishTime=1406040282064,finalStatus=KILLED
2014-07-23 17:00:02,907 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406039240027_0003 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,908 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406039240027_0003_000001 with final state: FINISHED
2014-07-23 17:00:02,908 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406039240027_0003_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,908 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406039240027_0003 State change from NEW to FINISHED
2014-07-23 17:00:02,908 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406039240027_0005
2014-07-23 17:00:02,908 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406039240027_0003
2014-07-23 17:00:02,908 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406039240027_0003,name=Sleep job,user=testos,queue=b,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406039240027_0003/jobhistory/job/job_1406039240027_0003,appMasterHost=N/A,startTime=1406039358483,finishTime=1406039766658,finalStatus=SUCCEEDED
2014-07-23 17:00:02,908 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406039240027_0005 with 1 attempts and final state = FINISHED
2014-07-23 17:00:02,908 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406039240027_0005_000001 with final state: FINISHED
2014-07-23 17:00:02,908 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406039240027_0005_000001 State change from NEW to FINISHED
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406039240027_0005 State change from NEW to FINISHED
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406004413939_0002
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406039240027_0005
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406039240027_0005,name=Sleep job,user=testos,queue=b,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406039240027_0005/jobhistory/job/job_1406039240027_0005,appMasterHost=N/A,startTime=1406039806541,finishTime=1406040141367,finalStatus=SUCCEEDED
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406004413939_0002 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406004413939_0002_000001 with final state: FAILED
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406004413939_0002_000002 with final state: FINISHED
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406004413939_0002_000001 State change from NEW to FAILED
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406004413939_0002_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406004413939_0002 State change from NEW to FINISHED
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1406004413939_0001
2014-07-23 17:00:02,909 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406004413939_0002
2014-07-23 17:00:02,910 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406004413939_0002,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406004413939_0002/jobhistory/job/job_1406004413939_0002,appMasterHost=N/A,startTime=1406005957333,finishTime=1406006306049,finalStatus=SUCCEEDED
2014-07-23 17:00:02,910 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1406004413939_0001 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,910 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406004413939_0001_000001 with final state: FAILED
2014-07-23 17:00:02,910 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1406004413939_0001_000002 with final state: FINISHED
2014-07-23 17:00:02,910 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406004413939_0001_000001 State change from NEW to FAILED
2014-07-23 17:00:02,910 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406004413939_0001_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,910 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406004413939_0001 State change from NEW to FINISHED
2014-07-23 17:00:02,910 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Default priority level is set to application:application_1405935053506_0001
2014-07-23 17:00:02,910 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406004413939_0001
2014-07-23 17:00:02,911 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406004413939_0001,name=word count,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406004413939_0001/jobhistory/job/job_1406004413939_0001,appMasterHost=N/A,startTime=1406005957292,finishTime=1406006216646,finalStatus=SUCCEEDED
2014-07-23 17:00:02,911 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Recovering app: application_1405935053506_0001 with 2 attempts and final state = FINISHED
2014-07-23 17:00:02,911 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935053506_0001_000001 with final state: FAILED
2014-07-23 17:00:02,911 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Recovering attempt: appattempt_1405935053506_0001_000002 with final state: FINISHED
2014-07-23 17:00:02,911 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935053506_0001_000001 State change from NEW to FAILED
2014-07-23 17:00:02,911 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1405935053506_0001_000002 State change from NEW to FINISHED
2014-07-23 17:00:02,911 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1405935053506_0001 State change from NEW to FINISHED
2014-07-23 17:00:02,911 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1405935053506_0001
2014-07-23 17:00:02,911 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1405935053506_0001,name=Sleep job,user=testos,queue=default,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1405935053506_0001/jobhistory/job/job_1405935053506_0001,appMasterHost=N/A,startTime=1405935094185,finishTime=1405935518318,finalStatus=SUCCEEDED
2014-07-23 17:00:02,912 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Rolling master-key for container-tokens
2014-07-23 17:00:02,912 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Rolling master-key for amrm-tokens
2014-07-23 17:00:02,914 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens
2014-07-23 17:00:02,914 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2014-07-23 17:00:02,914 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 147
2014-07-23 17:00:02,937 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Watcher event type: NodeChildrenChanged with state:SyncConnected for path:/rmstore/ZKRMStateRoot/RMDTSecretManagerRoot/RMDTMasterKeysRoot for Service org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore in state org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: STARTED
2014-07-23 17:00:02,939 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s) 2014-07-23 17:00:02,939 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens 2014-07-23 17:00:02,939 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 148 2014-07-23 17:00:02,946 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406002968974_0001 2014-07-23 17:00:02,946 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406030008028_0001 2014-07-23 17:00:02,946 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406030008028_0002 2014-07-23 17:00:02,946 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406106960130_0001 2014-07-23 17:00:02,946 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007392326_0001 2014-07-23 17:00:02,946 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0001 2014-07-23 17:00:02,946 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0003 2014-07-23 17:00:02,946 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0002 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application 
application_1406035038624_0005 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0004 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0007 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0006 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007392326_0002 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0009 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406003865107_0001 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0008 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406003865107_0002 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0011 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0010 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406040531743_0002 2014-07-23 17:00:02,947 WARN 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406033267767_0007 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406098935068_0006 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406033267767_0008 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406040531743_0001 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406098935068_0005 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406098935068_0004 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406098935068_0003 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406040531743_0006 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406098935068_0002 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406040531743_0005 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406098935068_0001 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application 
application_1406040531743_0004 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406040531743_0003 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406033267767_0001 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406033267767_0002 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406033267767_0003 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406002968974_0004 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406033267767_0004 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406033267767_0005 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406002968974_0002 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406002968974_0003 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406033267767_0006 2014-07-23 17:00:02,947 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406040253740_0001 2014-07-23 17:00:02,948 WARN 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0011 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0010 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0009 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0008 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0015 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0014 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0013 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0012 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0019 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0018 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0017 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application 
application_1405935231196_0016 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0023 2014-07-23 17:00:02,948 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0022 2014-07-23 17:00:02,953 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1406114813957_0002 user: testos leaf-queue of parent: root #applications: 1 2014-07-23 17:00:02,989 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1406114813957_0002 from user: testos, in queue: b 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0021 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406114813957_0001 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0020 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0026 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0027 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0024 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0025 2014-07-23 
17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0030 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0031 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0028 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0029 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0034 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406019383332_0001 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0035 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0032 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0033 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0038 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0036 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application 
application_1405935231196_0037 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406091608463_0004 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406019596496_0001 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406091608463_0003 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406019596496_0002 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406091608463_0005 2014-07-23 17:00:02,990 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406019596496_0003 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406091608463_0002 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406091608463_0001 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406096149218_0002 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406096149218_0001 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0004 2014-07-23 17:00:02,991 WARN 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406090411375_0003 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0003 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406090411375_0004 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0002 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406090411375_0005 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0001 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406090411375_0006 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0008 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0007 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0006 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406090411375_0001 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application 
application_1406007559025_0005 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406090411375_0002 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0012 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0011 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0010 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406007559025_0009 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405920868889_0008 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405920868889_0007 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405920868889_0006 2014-07-23 17:00:02,991 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405920868889_0005 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405920868889_0004 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405920868889_0003 2014-07-23 17:00:02,992 WARN 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405920868889_0002 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405920868889_0001 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406039105488_0002 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406039105488_0001 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0005 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0004 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0007 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0006 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405932581275_0001 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0001 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935231196_0003 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application 
application_1405935231196_0002 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406090411375_0009 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406090411375_0008 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406090411375_0007 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406031222625_0005 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406031222625_0006 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406031222625_0003 2014-07-23 17:00:02,992 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406031222625_0004 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406031222625_0001 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406031222625_0002 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406095945156_0001 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406039240027_0001 2014-07-23 17:00:02,993 WARN 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406039240027_0002 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405934731917_0001 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0019 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406097392962_0004 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0018 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406003634132_0001 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0017 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406097392962_0002 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406003634132_0002 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0016 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406097392962_0003 2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application 
application_1406035038624_0015
2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0014
2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406097392962_0001
2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0013
2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406035038624_0012
2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405950818845_0006
2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405950818845_0005
2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406030177639_0001
2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406030177639_0002
2014-07-23 17:00:02,993 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405950818845_0004
2014-07-23 17:00:02,994 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405950818845_0003
2014-07-23 17:00:02,994 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405950818845_0002
2014-07-23 17:00:02,994 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405950818845_0001
2014-07-23 17:00:02,994 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406039240027_0004
2014-07-23 17:00:02,994 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406039240027_0003
2014-07-23 17:00:02,994 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406039240027_0005
2014-07-23 17:00:02,994 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406004413939_0002
2014-07-23 17:00:02,994 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1406004413939_0001
2014-07-23 17:00:02,994 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Couldn't find application application_1405935053506_0001
2014-07-23 17:00:02,963 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2014-07-23 17:00:02,995 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45018
2014-07-23 17:00:02,997 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceTrackerPB to the server
2014-07-23 17:00:02,997 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-07-23 17:00:02,997 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 45018: starting
2014-07-23 17:00:03,057 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2014-07-23 17:00:03,059 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115003059, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
2014-07-23 17:00:03,066 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0005
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0005] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1878412866_1] on [10.18.40.84]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,068 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0005
2014-07-23 17:00:03,068 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0005
2014-07-23 17:00:03,070 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406030008028_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406030008028_0001] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1878412866_1] on [10.18.40.84]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,070 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406030008028_0001
2014-07-23 17:00:03,070 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406030008028_0001
2014-07-23 17:00:03,071 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406002968974_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406002968974_0001] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1585548272_1] on [10.18.40.95]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,071 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406002968974_0001
2014-07-23 17:00:03,071 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406002968974_0001
2014-07-23 17:00:03,072 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406030008028_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406030008028_0002] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1878412866_1] on [10.18.40.84]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,072 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406030008028_0002
2014-07-23 17:00:03,072 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406030008028_0002
2014-07-23 17:00:03,071 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0007
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0007] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,073 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0007
2014-07-23 17:00:03,073 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0007
2014-07-23 17:00:03,078 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007392326_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007392326_0002] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1585548272_1] on [10.18.40.95]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,078 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007392326_0002
2014-07-23 17:00:03,079 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007392326_0002
2014-07-23 17:00:03,079 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0003
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0003] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1878412866_1] on [10.18.40.84]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,079 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0003
2014-07-23 17:00:03,079 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0003
2014-07-23 17:00:03,080 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0006] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1585548272_1] on [10.18.40.95]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,080 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0006 2014-07-23 17:00:03,080 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0006 2014-07-23 17:00:03,083 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45017 2014-07-23 17:00:03,083 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406106960130_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406106960130_0001] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1878412866_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at 
org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,083 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0008 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): Failed to close file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0008. 
Lease recovery is in progress. Try again later. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2539) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at 
java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,084 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of 
application application_1406106960130_0001 2014-07-23 17:00:03,085 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0008 2014-07-23 17:00:03,085 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406106960130_0001 2014-07-23 17:00:03,085 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0008 2014-07-23 17:00:03,095 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server 2014-07-23 17:00:03,108 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting 2014-07-23 17:00:03,108 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 45017: starting 2014-07-23 17:00:03,124 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406040531743_0002 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406040531743_0002] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) 
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,124 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406040531743_0002 2014-07-23 17:00:03,124 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406040531743_0002 2014-07-23 17:00:03,125 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0004 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): Failed to close file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0004. Lease recovery is in progress. Try again later. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2539) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at 
java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,125 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0004 2014-07-23 17:00:03,125 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0004 2014-07-23 17:00:03,140 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406098935068_0006 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406098935068_0006] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) 
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at 
org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,140 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406098935068_0006 2014-07-23 17:00:03,140 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406098935068_0006 2014-07-23 17:00:03,141 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0001] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,141 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0010
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0010] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1585548272_1] on [10.18.40.95]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,141 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0001
2014-07-23 17:00:03,142 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0001
2014-07-23 17:00:03,141 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406003865107_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406003865107_0001] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,141 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406033267767_0008
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406033267767_0008] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,143 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406033267767_0008
2014-07-23 17:00:03,143 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406033267767_0008
2014-07-23 17:00:03,143 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406003865107_0001
2014-07-23 17:00:03,144 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406003865107_0001
2014-07-23 17:00:03,144 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406003865107_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406003865107_0002] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,142 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0002] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,142 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007392326_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007392326_0001] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1585548272_1] on [10.18.40.95]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,148 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007392326_0001
2014-07-23 17:00:03,149 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007392326_0001
2014-07-23 17:00:03,142 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0010
2014-07-23 17:00:03,148 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0002
2014-07-23 17:00:03,149 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0010
2014-07-23 17:00:03,149 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0002
2014-07-23 17:00:03,148 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406003865107_0002
2014-07-23 17:00:03,149 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406003865107_0002
2014-07-23 17:00:03,146 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406040531743_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406040531743_0001] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,150 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406040531743_0001
2014-07-23 17:00:03,150 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406040531743_0001
2014-07-23 17:00:03,151 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406040531743_0005
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406040531743_0005] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at
java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,152 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406040531743_0005 2014-07-23 17:00:03,153 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406098935068_0003 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406098935068_0003] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,153 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406098935068_0003 2014-07-23 17:00:03,153 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406098935068_0003 2014-07-23 17:00:03,153 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406040531743_0005 2014-07-23 17:00:03,168 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0011 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0011] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1585548272_1] on [10.18.40.95] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown 
Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,168 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0011 2014-07-23 17:00:03,168 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0011 2014-07-23 17:00:03,180 ERROR 
org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406098935068_0004 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406098935068_0004] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,181 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406098935068_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406098935068_0001] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at 
java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,181 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406098935068_0004 2014-07-23 17:00:03,181 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406098935068_0001 2014-07-23 17:00:03,181 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406098935068_0004 2014-07-23 17:00:03,181 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406098935068_0001 2014-07-23 17:00:03,190 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406033267767_0005 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406033267767_0005] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,191 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406033267767_0005 2014-07-23 17:00:03,191 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1406033267767_0005 2014-07-23 17:00:03,200 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406098935068_0002 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406098935068_0002] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at 
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,201 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406098935068_0002
2014-07-23 17:00:03,201 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406098935068_0002
2014-07-23 17:00:03,202 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406002968974_0003
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406002968974_0003] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1585548272_1] on [10.18.40.95]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,202 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406002968974_0003
2014-07-23 17:00:03,203 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406002968974_0003
2014-07-23 17:00:03,206 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0009
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0009] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,207 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0009
2014-07-23 17:00:03,207 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0009
2014-07-23 17:00:03,208 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406040531743_0003
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406040531743_0003] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,208 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406040531743_0003
2014-07-23 17:00:03,208 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406040531743_0003
2014-07-23 17:00:03,209 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0017
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0017] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,210 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0017
2014-07-23 17:00:03,210 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0017
2014-07-23 17:00:03,211 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0015
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0015] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,211 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0015
2014-07-23 17:00:03,211 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0015
2014-07-23 17:00:03,212 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406098935068_0005
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406098935068_0005] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,213 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406098935068_0005
2014-07-23 17:00:03,213 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406098935068_0005
2014-07-23 17:00:03,215 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406002968974_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406002968974_0002] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1878412866_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,215 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406002968974_0002
2014-07-23 17:00:03,216 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406002968974_0002
2014-07-23 17:00:03,216 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0016
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0016] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,217 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0016
2014-07-23 17:00:03,217 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0016
2014-07-23 17:00:03,218 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406040531743_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406040531743_0006] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at 
org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,218 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406040531743_0006 2014-07-23 17:00:03,218 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406040531743_0006 2014-07-23 17:00:03,220 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application 
application_1406033267767_0006 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406033267767_0006] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,220 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406033267767_0006 2014-07-23 17:00:03,220 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406033267767_0006 2014-07-23 17:00:03,221 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406033267767_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406033267767_0001] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at 
org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,221 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406033267767_0001 2014-07-23 17:00:03,221 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406033267767_0001 2014-07-23 17:00:03,206 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue 2014-07-23 17:00:03,212 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406033267767_0003 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406033267767_0003] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being 
created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,208 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406033267767_0007 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406033267767_0007] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,222 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406033267767_0003 2014-07-23 17:00:03,222 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406033267767_0007 2014-07-23 17:00:03,222 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406033267767_0003 2014-07-23 17:00:03,222 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406033267767_0007 2014-07-23 17:00:03,225 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0027 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0027] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at 
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,226 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0027
2014-07-23 17:00:03,226 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0027
2014-07-23 17:00:03,226 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406040531743_0004
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406040531743_0004] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,226 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406040531743_0004
2014-07-23 17:00:03,226 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406040531743_0004
2014-07-23 17:00:03,227 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0025
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0025] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,227 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0025
2014-07-23 17:00:03,227 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0025
2014-07-23 17:00:03,237 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45022
2014-07-23 17:00:03,237 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0012
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0012] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,242 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0012
2014-07-23 17:00:03,242 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0012
2014-07-23 17:00:03,243 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0013
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0013] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,243 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0013
2014-07-23 17:00:03,243 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0013
2014-07-23 17:00:03,244 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0008
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0008] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1585548272_1] on [10.18.40.95]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,245 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0008
2014-07-23 17:00:03,245 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0008
2014-07-23 17:00:03,245 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationClientProtocolPB to the server
2014-07-23 17:00:03,252 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406033267767_0004
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to create file [/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406033267767_0004] for [DFSClient_NONMAPREDUCE_-903472038_1] for client [10.18.40.84], because this file is already being created by [DFSClient_NONMAPREDUCE_1925787811_1] on [10.18.40.84]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,252 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406033267767_0004
2014-07-23 17:00:03,252 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406033267767_0004
2014-07-23 17:00:03,253 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0010
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0010 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,253 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0010 2014-07-23 17:00:03,253 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1405935231196_0010
2014-07-23 17:00:03,254 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0026
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0026 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,254 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0026
2014-07-23 17:00:03,254 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0026
2014-07-23 17:00:03,255 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0022
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0022 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,256 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0022
2014-07-23 17:00:03,256 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter:
Error when storing the finish data of application application_1405935231196_0022
2014-07-23 17:00:03,255 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0037
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0037 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,255 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406033267767_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406033267767_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,256 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0037
2014-07-23 17:00:03,257 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0037
2014-07-23 17:00:03,256 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0035
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0035 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,257 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406033267767_0002
2014-07-23 17:00:03,257 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter:
Error when storing the finish data of application application_1406033267767_0002
2014-07-23 17:00:03,257 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0035
2014-07-23 17:00:03,257 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0035
2014-07-23 17:00:03,259 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0018
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0018 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,259 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0018
2014-07-23 17:00:03,260 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0018
2014-07-23 17:00:03,260 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0023
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0023 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,260 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0023
2014-07-23 17:00:03,260 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter:
Error when storing the finish data of application application_1405935231196_0023
2014-07-23 17:00:03,262 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406040253740_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406040253740_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,262 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406040253740_0001
2014-07-23 17:00:03,262 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406040253740_0001
2014-07-23 17:00:03,263 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-07-23 17:00:03,266 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0036
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0036 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    ... (stack trace identical to the AlreadyBeingCreatedException trace above; omitted)
2014-07-23 17:00:03,267 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0036
2014-07-23 17:00:03,267 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0036
2014-07-23 17:00:03,267 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0020
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0020 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    ... (stack trace identical to the AlreadyBeingCreatedException trace above; omitted)
2014-07-23 17:00:03,267 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0020
2014-07-23 17:00:03,267 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0020
2014-07-23 17:00:03,269 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 45022: starting
2014-07-23 17:00:03,270 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406019596496_0003
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406019596496_0003 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    ... (stack trace identical to the AlreadyBeingCreatedException trace above; omitted)
2014-07-23 17:00:03,271 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406019596496_0003
2014-07-23 17:00:03,271 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0032
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0032 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    ... (stack trace identical to the AlreadyBeingCreatedException trace above; omitted)
2014-07-23 17:00:03,271 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0032
2014-07-23 17:00:03,271 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0032
2014-07-23 17:00:03,271 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406019596496_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406019596496_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    ... (stack trace identical to the AlreadyBeingCreatedException trace above; omitted)
2014-07-23 17:00:03,272 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406019596496_0001
2014-07-23 17:00:03,272 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406019596496_0001
2014-07-23 17:00:03,271 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406019596496_0003
2014-07-23 17:00:03,273 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406019383332_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406019383332_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    ... (stack trace identical to the AlreadyBeingCreatedException trace above; omitted)
2014-07-23 17:00:03,273 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406019383332_0001
2014-07-23 17:00:03,274 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406019383332_0001
2014-07-23 17:00:03,277 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0011
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0011 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    ... (stack trace identical to the AlreadyBeingCreatedException trace above; omitted)
2014-07-23 17:00:03,339 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0011
2014-07-23 17:00:03,339 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0011
2014-07-23 17:00:03,339 INFO com.huawei.hadoop.datasight.RMAppTimeOutService: Successfully started RMAppTimeOutThread
2014-07-23 17:00:03,339 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to active state
2014-07-23 17:00:03,301 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0033
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0033 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method) at
javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,300 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406090411375_0003 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406090411375_0003 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,342 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406090411375_0003 2014-07-23 17:00:03,342 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1406090411375_0003 2014-07-23 17:00:03,297 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406090411375_0006 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406090411375_0006 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,295 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406019596496_0002 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406019596496_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native 
Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,343 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406019596496_0002 2014-07-23 17:00:03,343 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406019596496_0002 2014-07-23 17:00:03,292 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0030 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0030 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,344 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0030 2014-07-23 17:00:03,344 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1405935231196_0030 2014-07-23 17:00:03,348 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0021 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0021 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,348 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0021 2014-07-23 17:00:03,349 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0021 2014-07-23 17:00:03,349 INFO org.apache.hadoop.conf.Configuration: found resource capacity-scheduler.xml at file:/home/testos/july21/hadoop/etc/hadoop/capacity-scheduler.xml 2014-07-23 17:00:03,351 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0005 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0005 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,351 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0005
2014-07-23 17:00:03,351 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0005
2014-07-23 17:00:03,352 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406091608463_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406091608463_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,353 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406091608463_0001
2014-07-23 17:00:03,353 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406091608463_0001
2014-07-23 17:00:03,353 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406091608463_0005
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406091608463_0005 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,353 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406091608463_0005
2014-07-23 17:00:03,353 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406091608463_0005
2014-07-23 17:00:03,344 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 17:00:03,343 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406090411375_0006
2014-07-23 17:00:03,341 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0033
2014-07-23 17:00:03,360 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0033
2014-07-23 17:00:03,360 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406090411375_0006
2014-07-23 17:00:03,362 WARN com.huawei.hadoop.datasight.RMAppTimeOutService: Execution expiry time is not set properly. Proceeding with 5 mins.
2014-07-23 17:00:03,396 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0031
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0031 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,396 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0031
2014-07-23 17:00:03,396 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0031
2014-07-23 17:00:03,405 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406096149218_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406096149218_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,405 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406039105488_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406039105488_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,405 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406096149218_0002
2014-07-23 17:00:03,406 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406096149218_0002
2014-07-23 17:00:03,406 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0008
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0008 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,406 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0008
2014-07-23 17:00:03,406 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0008
2014-07-23 17:00:03,406 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406039105488_0002
2014-07-23 17:00:03,407 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406039105488_0002
2014-07-23 17:00:03,407 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406090411375_0004
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406090411375_0004 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,408 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406090411375_0004
2014-07-23 17:00:03,408 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406090411375_0004
2014-07-23 17:00:03,409 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0009
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0009 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,409 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0009
2014-07-23 17:00:03,409 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter:
Error when storing the finish data of application application_1406007559025_0009
2014-07-23 17:00:03,409 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0004
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0004 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Re-initializing queues...
2014-07-23 17:00:03,481 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0004
2014-07-23 17:00:03,481 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0004
2014-07-23 17:00:03,481 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0003
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0003 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,482 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0006 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,482 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root, capacity=1.0, asboluteCapacity=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=ADMINISTER_QUEUE:*SUBMIT_APPLICATIONS:*
2014-07-23 17:00:03,483 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Initialized parent-queue root name=root, fullname=root
2014-07-23 17:00:03,484 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing a
capacity = 0.8 [= (float) configuredCapacity / 100 ]
asboluteCapacity = 0.8 [= parentAbsoluteCapacity * capacity ]
maxCapacity = 1.0 [= configuredMaxCapacity ]
absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
userLimit = 100 [= configuredUserLimit ]
userLimitFactor = 2.0 [= configuredUserLimitFactor ]
maxApplications = 8000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
maxApplicationsPerUser = 16000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
maxActiveApplications = 1 [= max((int)ceil((clusterResourceMemory / minimumAllocation) * maxAMResourcePerQueuePercent * absoluteMaxCapacity),1) ]
maxActiveAppsUsingAbsCap = 1 [= max((int)ceil((clusterResourceMemory / minimumAllocation) *maxAMResourcePercent * absoluteCapacity),1) ]
maxActiveApplicationsPerUser = 2 [= max((int)(maxActiveApplications * (userLimit / 100.0f) * userLimitFactor),1) ]
usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
maxAMResourcePerQueuePercent = 0.4 [= configuredMaximumAMResourcePercent ]
minimumAllocationFactor = 0.875 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
numContainers = 0 [= currentNumContainers ]
state = RUNNING [= configuredState ]
acls = ADMINISTER_QUEUE: SUBMIT_APPLICATIONS: [= configuredAcls ]
nodeLocalityDelay = 40
2014-07-23 17:00:03,484 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: a: capacity=0.8, absoluteCapacity=0.8, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
2014-07-23 17:00:03,484 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing b
capacity = 0.2 [= (float) configuredCapacity / 100 ]
asboluteCapacity = 0.2 [= parentAbsoluteCapacity * capacity ]
maxCapacity = 1.0 [= configuredMaxCapacity ]
absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
userLimit = 100 [= configuredUserLimit ]
userLimitFactor = 2.0 [= configuredUserLimitFactor ]
maxApplications = 2000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
maxApplicationsPerUser = 4000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
maxActiveApplications = 1 [= max((int)ceil((clusterResourceMemory / minimumAllocation) * maxAMResourcePerQueuePercent * absoluteMaxCapacity),1) ]
maxActiveAppsUsingAbsCap = 1 [= max((int)ceil((clusterResourceMemory / minimumAllocation) *maxAMResourcePercent * absoluteCapacity),1) ]
maxActiveApplicationsPerUser = 2 [= max((int)(maxActiveApplications * (userLimit / 100.0f) * userLimitFactor),1) ]
usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
maxAMResourcePerQueuePercent = 0.4 [= configuredMaximumAMResourcePercent ]
minimumAllocationFactor = 0.875 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
numContainers = 0 [= currentNumContainers ]
state = RUNNING [= configuredState ]
acls = ADMINISTER_QUEUE:*SUBMIT_APPLICATIONS: [= configuredAcls ]
nodeLocalityDelay = 40
2014-07-23 17:00:03,484 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: b: capacity=0.2, absoluteCapacity=0.2, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
2014-07-23 17:00:03,484 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: root: numChildQueue= 2, capacity=1.0, absoluteCapacity=1.0, usedResources=usedCapacity=0.0, numApps=0, numContainers=0
2014-07-23 17:00:03,484 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root, capacity=1.0, asboluteCapacity=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=ADMINISTER_QUEUE:*SUBMIT_APPLICATIONS:*
2014-07-23 17:00:03,485 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing b
capacity = 0.2 [= (float) configuredCapacity / 100 ]
asboluteCapacity = 0.2 [= parentAbsoluteCapacity * capacity ]
maxCapacity = 1.0 [= configuredMaxCapacity ]
absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
userLimit = 100 [= configuredUserLimit ]
userLimitFactor = 2.0 [= configuredUserLimitFactor ]
maxApplications = 2000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
maxApplicationsPerUser = 4000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
maxActiveApplications = 1 [= max((int)ceil((clusterResourceMemory / minimumAllocation) * maxAMResourcePerQueuePercent * absoluteMaxCapacity),1) ]
maxActiveAppsUsingAbsCap = 1 [= max((int)ceil((clusterResourceMemory / minimumAllocation) *maxAMResourcePercent * absoluteCapacity),1) ]
maxActiveApplicationsPerUser = 2 [= max((int)(maxActiveApplications * (userLimit / 100.0f) * userLimitFactor),1) ]
usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
maxAMResourcePerQueuePercent = 0.4 [= configuredMaximumAMResourcePercent ]
minimumAllocationFactor = 0.875 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
numContainers = 0 [= currentNumContainers ]
state = RUNNING [= configuredState ]
acls = ADMINISTER_QUEUE:*SUBMIT_APPLICATIONS: [= configuredAcls ]
nodeLocalityDelay = 40
2014-07-23 17:00:03,485 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root: re-configured queue: b: capacity=0.2, absoluteCapacity=0.2, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
2014-07-23 17:00:03,485 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing a
capacity = 0.8 [= (float) configuredCapacity / 100 ]
asboluteCapacity = 0.8 [= parentAbsoluteCapacity * capacity ]
maxCapacity = 1.0 [= configuredMaxCapacity ]
absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
userLimit = 100 [= configuredUserLimit ]
userLimitFactor = 2.0 [= configuredUserLimitFactor ]
maxApplications = 8000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
maxApplicationsPerUser = 16000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
maxActiveApplications = 1 [= max((int)ceil((clusterResourceMemory / minimumAllocation) * maxAMResourcePerQueuePercent * absoluteMaxCapacity),1) ]
maxActiveAppsUsingAbsCap = 1 [= max((int)ceil((clusterResourceMemory / minimumAllocation) *maxAMResourcePercent * absoluteCapacity),1) ]
maxActiveApplicationsPerUser = 2 [= max((int)(maxActiveApplications * (userLimit / 100.0f) * userLimitFactor),1) ]
usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
maxAMResourcePerQueuePercent = 0.4 [= configuredMaximumAMResourcePercent ]
minimumAllocationFactor = 0.875 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
numContainers = 0 [= currentNumContainers ]
state = RUNNING [= configuredState ]
acls = ADMINISTER_QUEUE: SUBMIT_APPLICATIONS: [= configuredAcls ]
nodeLocalityDelay = 40
2014-07-23 17:00:03,485 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root: re-configured queue: a: capacity=0.8, absoluteCapacity=0.8, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
2014-07-23 17:00:03,485 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=refreshQueues TARGET=AdminService RESULT=SUCCESS
2014-07-23 17:00:03,483 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0006
2014-07-23 17:00:03,490 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0006
2014-07-23 17:00:03,482 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405920868889_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405920868889_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,491 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405920868889_0002
2014-07-23 17:00:03,491 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405920868889_0002
2014-07-23 17:00:03,482 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0003
2014-07-23 17:00:03,494 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0003
2014-07-23 17:00:03,497 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406002968974_0004
java.io.IOException: Output file not at zero offset.
	at org.apache.hadoop.io.file.tfile.BCFile$Writer.<init>(BCFile.java:288)
	at org.apache.hadoop.io.file.tfile.TFile$Writer.<init>(TFile.java:288)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:728)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,498 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406002968974_0004
2014-07-23 17:00:03,498 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406002968974_0004
2014-07-23 17:00:03,499 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	... (stack trace identical to the AlreadyBeingCreatedException trace above)
2014-07-23 17:00:03,499 INFO org.apache.hadoop.conf.Configuration: found resource yarn-site.xml at file:/home/testos/july21/hadoop/etc/hadoop/yarn-site.xml
2014-07-23 17:00:03,500 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0002
2014-07-23 17:00:03,500 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0002
2014-07-23 17:00:03,502 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to
2014-07-23 17:00:03,502 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to
2014-07-23 17:00:03,502 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2014-07-23 17:00:03,502 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=refreshNodes TARGET=AdminService RESULT=SUCCESS
2014-07-23 17:00:03,509 INFO org.apache.hadoop.conf.Configuration: found resource core-site.xml at file:/home/testos/july21/hadoop/etc/hadoop/core-site.xml
2014-07-23 17:00:03,509 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405920868889_0003
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405920868889_0003 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	... (stack trace identical to the AlreadyBeingCreatedException trace above)
2014-07-23 17:00:03,510 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405920868889_0003
2014-07-23 17:00:03,510 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405920868889_0003
2014-07-23 17:00:03,510 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=refreshSuperUserGroupsConfiguration TARGET=AdminService RESULT=SUCCESS
2014-07-23 17:00:03,514 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406090411375_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406090411375_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,514 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0003 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0003 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,515 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0003 2014-07-23 17:00:03,515 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0005 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0005 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at 
java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,515 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0005 2014-07-23 17:00:03,515 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0005 2014-07-23 17:00:03,514 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406090411375_0001 2014-07-23 17:00:03,516 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406090411375_0001 2014-07-23 17:00:03,517 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0014 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0014 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,517 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0014 2014-07-23 17:00:03,515 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1405935231196_0003
2014-07-23 17:00:03,517 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0014
2014-07-23 17:00:03,519 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0006 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,519 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0006
2014-07-23 17:00:03,519 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0006
2014-07-23 17:00:03,523 INFO org.apache.hadoop.conf.Configuration: found resource core-site.xml at file:/home/testos/july21/hadoop/etc/hadoop/core-site.xml
2014-07-23 17:00:03,523 INFO org.apache.hadoop.security.Groups: clearing userToGroupsMap cache
2014-07-23 17:00:03,523 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=refreshUserToGroupsMappings TARGET=AdminService RESULT=SUCCESS
2014-07-23 17:00:03,523 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406090411375_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406090411375_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,524 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406090411375_0002
2014-07-23 17:00:03,524 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter:
Error when storing the finish data of application application_1406090411375_0002
2014-07-23 17:00:03,525 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406031222625_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406031222625_0006 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,525 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406031222625_0006
2014-07-23 17:00:03,525 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406031222625_0006
2014-07-23 17:00:03,526 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405920868889_0007
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405920868889_0007 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,526 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405920868889_0007
2014-07-23 17:00:03,526 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter:
Error when storing the finish data of application application_1405920868889_0007
2014-07-23 17:00:03,527 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0024
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0024 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,527 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0024
2014-07-23 17:00:03,528 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0024
2014-07-23 17:00:03,532 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406031222625_0003
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406031222625_0003 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,533 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406031222625_0003
2014-07-23 17:00:03,533 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter:
Error when storing the finish data of application application_1406031222625_0003
2014-07-23 17:00:03,550 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=transitionToActive TARGET=RMHAProtocolService RESULT=SUCCESS
2014-07-23 17:00:03,565 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0009
java.io.IOException: Output file not at zero offset.
	at org.apache.hadoop.io.file.tfile.BCFile$Writer.<init>(BCFile.java:288)
	at org.apache.hadoop.io.file.tfile.TFile$Writer.<init>(TFile.java:288)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:728)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,565 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0009
2014-07-23 17:00:03,565 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406091608463_0002
java.io.IOException: Output file not at zero offset.
2014-07-23 17:00:03,565 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406091608463_0002
2014-07-23 17:00:03,565 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406091608463_0002
2014-07-23 17:00:03,565 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406114813957_0001
java.io.IOException: Output file not at zero offset.
2014-07-23 17:00:03,565 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0009
2014-07-23 17:00:03,566 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406114813957_0001
2014-07-23 17:00:03,566 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406114813957_0001
2014-07-23 17:00:03,568 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406090411375_0007
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file
/home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406090411375_0007 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,569 ERROR 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406090411375_0007 2014-07-23 17:00:03,569 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406031222625_0004 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406031222625_0004 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,569 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406090411375_0007 2014-07-23 17:00:03,569 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406031222625_0004 2014-07-23 17:00:03,570 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406031222625_0004 2014-07-23 17:00:03,572 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406097392962_0003 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406097392962_0003 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,573 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406097392962_0003 2014-07-23 17:00:03,573 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1406097392962_0003 2014-07-23 17:00:03,573 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406031222625_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406031222625_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,574 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406031222625_0001 2014-07-23 17:00:03,574 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406031222625_0001 2014-07-23 17:00:03,574 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405920868889_0008 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405920868889_0008 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,575 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405920868889_0008 2014-07-23 17:00:03,575 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1405920868889_0008 2014-07-23 17:00:03,579 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0034 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0034 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,579 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0034 2014-07-23 17:00:03,579 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0034 2014-07-23 17:00:03,580 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406030177639_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406030177639_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,580 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406030177639_0001
2014-07-23 17:00:03,580 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406030177639_0001
2014-07-23 17:00:03,581 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0010
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0010 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at $Proxy15.append(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
	at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
	at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,581 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0010
2014-07-23 17:00:03,581 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0010
2014-07-23 17:00:03,582 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0019
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0019 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,582 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0019
2014-07-23 17:00:03,582 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0019
2014-07-23 17:00:03,584 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0028
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0028 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,584 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0028
2014-07-23 17:00:03,584 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0028
2014-07-23 17:00:03,587 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406097392962_0004
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406097392962_0004 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,587 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406097392962_0004
2014-07-23 17:00:03,587 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406097392962_0004
2014-07-23 17:00:03,587 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406097392962_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406097392962_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,588 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406097392962_0001
2014-07-23 17:00:03,588 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406097392962_0001
2014-07-23 17:00:03,592 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0013
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0013 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,592 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0013
2014-07-23 17:00:03,593 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0018
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0018 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,593 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406039105488_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406039105488_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,593 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406090411375_0005
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406090411375_0005 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at $Proxy14.append(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,593 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0013 2014-07-23 17:00:03,593 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the start data of application application_1406090411375_0005 2014-07-23 17:00:03,593 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406039105488_0001 2014-07-23 17:00:03,593 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406090411375_0005 2014-07-23 17:00:03,594 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406039105488_0001 2014-07-23 17:00:03,593 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0018 2014-07-23 17:00:03,594 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0018 2014-07-23 17:00:03,595 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406039240027_0005 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406039240027_0005 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,596 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406039240027_0005 2014-07-23 17:00:03,596 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1406039240027_0005 2014-07-23 17:00:03,597 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405920868889_0004 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405920868889_0004 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,597 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405920868889_0004 2014-07-23 17:00:03,597 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405920868889_0004 2014-07-23 17:00:03,598 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406030177639_0002 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406030177639_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,598 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0038 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0038 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,598 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406030177639_0002 2014-07-23 17:00:03,598 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0038 2014-07-23 17:00:03,598 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406030177639_0002 2014-07-23 17:00:03,598 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0038 2014-07-23 17:00:03,603 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406003634132_0002 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406003634132_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,603 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406003634132_0002 2014-07-23 17:00:03,603 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1406003634132_0002
2014-07-23 17:00:03,604 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,604 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0001
2014-07-23 17:00:03,604 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0001
2014-07-23 17:00:03,605 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0007
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0007 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,605 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0007
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0007 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,605 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0007
2014-07-23 17:00:03,605 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0007
2014-07-23 17:00:03,605 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0007
2014-07-23 17:00:03,605 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0007
2014-07-23 17:00:03,610 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405950818845_0005
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405950818845_0005 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,611 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405950818845_0005 2014-07-23 17:00:03,611 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1405950818845_0005 2014-07-23 17:00:03,612 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406031222625_0002 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406031222625_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,612 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406031222625_0002 2014-07-23 17:00:03,613 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406031222625_0002 2014-07-23 17:00:03,613 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405920868889_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405920868889_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,613 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405920868889_0001 2014-07-23 17:00:03,613 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1405920868889_0001 2014-07-23 17:00:03,614 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406090411375_0008 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406090411375_0008 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,614 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406090411375_0008 2014-07-23 17:00:03,614 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406090411375_0008 2014-07-23 17:00:03,619 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406039240027_0003 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406039240027_0003 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,619 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406039240027_0003
2014-07-23 17:00:03,619 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406039240027_0003
2014-07-23 17:00:03,620 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406095945156_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406095945156_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,620 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406095945156_0001
2014-07-23 17:00:03,620 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406095945156_0001
2014-07-23 17:00:03,621 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0004
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0004 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,621 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0004
2014-07-23 17:00:03,621 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0004
2014-07-23 17:00:03,622 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0015
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0015 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,622 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0015
2014-07-23 17:00:03,622 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0015
2014-07-23 17:00:03,628 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0019
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0019 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,629 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0019
2014-07-23 17:00:03,629 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0019
2014-07-23 17:00:03,629 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406004413939_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406004413939_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,629 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405932581275_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405932581275_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,630 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406004413939_0001
2014-07-23 17:00:03,630 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405932581275_0001
2014-07-23 17:00:03,630 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406004413939_0001
2014-07-23 17:00:03,630 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405950818845_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405950818845_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,630 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405932581275_0001
2014-07-23 17:00:03,630 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405950818845_0002
2014-07-23 17:00:03,630 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405950818845_0002
2014-07-23 17:00:03,636 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405950818845_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405950818845_0006 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,637 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405950818845_0006 2014-07-23 17:00:03,637 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405950818845_0006 2014-07-23 17:00:03,637 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406031222625_0005 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406031222625_0005 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,637 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406031222625_0005 2014-07-23 17:00:03,638 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1406031222625_0005 2014-07-23 17:00:03,638 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935053506_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935053506_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,638 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935053506_0001 2014-07-23 17:00:03,638 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935053506_0001 2014-07-23 17:00:03,644 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406039240027_0004 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406039240027_0004 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
2014-07-23 17:00:03,645 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406039240027_0004
2014-07-23 17:00:03,645 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406039240027_0004
2014-07-23 17:00:03,647 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406097392962_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406097392962_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,647 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406097392962_0002
2014-07-23 17:00:03,647 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406097392962_0002
2014-07-23 17:00:03,653 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406004413939_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406004413939_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,654 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406004413939_0002
2014-07-23 17:00:03,654 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406004413939_0002
2014-07-23 17:00:03,674 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406114813957_0002
java.io.IOException: Output file not at zero offset.
	at org.apache.hadoop.io.file.tfile.BCFile$Writer.<init>(BCFile.java:288)
	at org.apache.hadoop.io.file.tfile.TFile$Writer.<init>(TFile.java:288)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:728)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
	at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
	at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,674 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406114813957_0002
2014-07-23 17:00:03,682 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405935231196_0029
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405935231196_0029 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
2014-07-23 17:00:03,683 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405935231196_0029
2014-07-23 17:00:03,683 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405935231196_0029
2014-07-23 17:00:03,690 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406091608463_0003
java.io.IOException: Output file not at zero offset.
2014-07-23 17:00:03,690 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406091608463_0003
2014-07-23 17:00:03,691 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406091608463_0003
2014-07-23 17:00:03,692 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0014
java.io.IOException: Output file not at zero offset.
2014-07-23 17:00:03,692 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0014
2014-07-23 17:00:03,692 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0014
2014-07-23 17:00:03,700 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application
application_1406007559025_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,700 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0001 2014-07-23 17:00:03,700 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0001 2014-07-23 17:00:03,701 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405950818845_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405950818845_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,701 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405950818845_0001 2014-07-23 17:00:03,701 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1405950818845_0001 2014-07-23 17:00:03,706 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0011 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0011 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,706 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0011 2014-07-23 17:00:03,707 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0011 2014-07-23 17:00:03,707 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0012 java.io.IOException: Output file not at zero offset. at org.apache.hadoop.io.file.tfile.BCFile$Writer.(BCFile.java:288) at org.apache.hadoop.io.file.tfile.TFile$Writer.(TFile.java:288) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:728) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,707 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0012 2014-07-23 17:00:03,707 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0012 2014-07-23 17:00:03,714 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405920868889_0005 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405920868889_0005 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,715 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405920868889_0005 2014-07-23 17:00:03,715 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1405920868889_0005 2014-07-23 17:00:03,720 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406090411375_0009 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406090411375_0009 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,721 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406090411375_0009 2014-07-23 17:00:03,721 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406090411375_0009 2014-07-23 17:00:03,724 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406091608463_0004 java.io.IOException: Output file not at zero offset. at org.apache.hadoop.io.file.tfile.BCFile$Writer.(BCFile.java:288) at org.apache.hadoop.io.file.tfile.TFile$Writer.(TFile.java:288) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:728) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,724 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406091608463_0004 2014-07-23 17:00:03,724 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406091608463_0004 2014-07-23 17:00:03,728 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406039240027_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406039240027_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,728 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406039240027_0001
2014-07-23 17:00:03,728 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406039240027_0001
2014-07-23 17:00:03,732 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406096149218_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406096149218_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,732 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406096149218_0001
2014-07-23 17:00:03,732 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406096149218_0001
2014-07-23 17:00:03,736 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405934731917_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405934731917_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,736 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405934731917_0001
2014-07-23 17:00:03,737 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405934731917_0001
2014-07-23 17:00:03,740 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,741 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0002
2014-07-23 17:00:03,741 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0002
2014-07-23 17:00:03,744 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0016
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0016 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,745 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0016
2014-07-23 17:00:03,745 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406035038624_0016
2014-07-23 17:00:03,749 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406007559025_0012
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406007559025_0012 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,749 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406007559025_0012
2014-07-23 17:00:03,749 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406007559025_0012
2014-07-23 17:00:03,753 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405950818845_0003
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405950818845_0003 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,754 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405950818845_0003
2014-07-23 17:00:03,754 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405950818845_0003
2014-07-23 17:00:03,757 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405920868889_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405920868889_0006 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy14.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at $Proxy15.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.<init>(FileSystemApplicationHistoryStore.java:723)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297)
    at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:662)
2014-07-23 17:00:03,757 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405920868889_0006
2014-07-23 17:00:03,757 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405920868889_0006
2014-07-23 17:00:03,764 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406039240027_0002
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406039240027_0002 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,764 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406039240027_0002 2014-07-23 17:00:03,764 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1406039240027_0002 2014-07-23 17:00:03,771 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406003634132_0001 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406003634132_0001 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,771 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406003634132_0001 2014-07-23 17:00:03,771 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406003634132_0001 2014-07-23 17:00:03,777 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1406035038624_0017 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1406035038624_0017 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,777 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1406035038624_0017 2014-07-23 17:00:03,777 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: 
Error when storing the finish data of application application_1406035038624_0017 2014-07-23 17:00:03,786 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore: Error when openning history file of application application_1405950818845_0004 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /home/testos/timelinedata/generic-history/ApplicationHistoryDataRoot/application_1405950818845_0004 for DFSClient_NONMAPREDUCE_-903472038_1 for client 10.18.40.84 because current leaseholder is trying to recreate file. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2502) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2378) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2613) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2576) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:537) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:373) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy14.append(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:276) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) at $Proxy15.append(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1569) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1609) at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1597) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:316) at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore$HistoryFileWriter.(FileSystemApplicationHistoryStore.java:723) at org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore.applicationStarted(FileSystemApplicationHistoryStore.java:418) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter.handleWritingApplicationHistoryEvent(RMApplicationHistoryWriter.java:140) at org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:297) at 
org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler.handle(RMApplicationHistoryWriter.java:292) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) at java.lang.Thread.run(Thread.java:662) 2014-07-23 17:00:03,786 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application application_1405950818845_0004 2014-07-23 17:00:03,786 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1405950818845_0004 2014-07-23 17:00:06,060 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115006060, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:09,061 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115009061, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:12,061 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115012061, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:15,062 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115015062, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:18,063 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115018063, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:21,063 INFO 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115021063, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:24,064 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115024064, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:27,065 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115027065, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:30,065 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115030065, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:33,066 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115033066, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:36,067 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115036067, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:39,067 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115039067, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:42,068 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115042068, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:45,069 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115045069, 
a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:48,069 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115048069, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:51,070 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115051070, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:53,420 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node not found resyncing HOST-10-18-40-84:45026 2014-07-23 17:00:54,071 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115054071, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 2014-07-23 17:00:54,435 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: received container statuses on node manager register :[ContainerStatus: [ContainerId: container_1406114813957_0002_01_000013, State: COMPLETE, Diagnostics: Container killed by the ApplicationMaster. Container killed on request. 
Exit code is 143 Container exited with a non-zero exit code 143 , ExitStatus: 143, ]] 2014-07-23 17:00:54,448 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved HOST-10-18-40-84 to /default-rack 2014-07-23 17:00:54,452 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node HOST-10-18-40-84(cmPort: 45026 httpPort: 45025) registered with capability: , assigned nodeId HOST-10-18-40-84:45026 2014-07-23 17:00:54,454 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: HOST-10-18-40-84:45026 Node Transitioned from NEW to RUNNING 2014-07-23 17:00:54,455 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node HOST-10-18-40-84:45026 clusterResource: 2014-07-23 17:00:54,465 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed... 2014-07-23 17:00:57,072 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115057072, a, 0, 0, 0, 0, 8192, 6, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 2048, 1, 0, 0, 0, 0, 0, 0 2014-07-23 17:01:00,025 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node not found resyncing HOST-10-18-40-26:45026 2014-07-23 17:01:00,073 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115060073, a, 0, 0, 0, 0, 8192, 6, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 2048, 1, 0, 0, 0, 0, 0, 0 2014-07-23 17:01:01,034 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: received container statuses on node manager register :[ContainerStatus: [ContainerId: container_1406114813957_0002_01_000001, State: COMPLETE, Diagnostics: Container Killed by ResourceManager Container killed on request. 
Exit code is 143 Container exited with a non-zero exit code 143 , ExitStatus: 143, ], ContainerStatus: [ContainerId: container_1406114813957_0002_01_000002, State: COMPLETE, Diagnostics: Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 , ExitStatus: 143, ], ContainerStatus: [ContainerId: container_1406114813957_0002_01_000003, State: COMPLETE, Diagnostics: Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 , ExitStatus: 143, ], ContainerStatus: [ContainerId: container_1406114813957_0002_01_000004, State: COMPLETE, Diagnostics: Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 , ExitStatus: 143, ], ContainerStatus: [ContainerId: container_1406114813957_0002_01_000005, State: COMPLETE, Diagnostics: Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 , ExitStatus: 143, ], ContainerStatus: [ContainerId: container_1406114813957_0002_01_000006, State: COMPLETE, Diagnostics: Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 , ExitStatus: 143, ], ContainerStatus: [ContainerId: container_1406114813957_0002_01_000007, State: COMPLETE, Diagnostics: Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 , ExitStatus: 143, ], ContainerStatus: [ContainerId: container_1406114813957_0002_01_000008, State: COMPLETE, Diagnostics: Container killed by the ApplicationMaster. Container killed on request. 
Exit code is 143 Container exited with a non-zero exit code 143 , ExitStatus: 143, ], ContainerStatus: [ContainerId: container_1406114813957_0002_01_000009, State: COMPLETE, Diagnostics: Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 , ExitStatus: 143, ]] 2014-07-23 17:01:01,034 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved HOST-10-18-40-26 to /default-rack 2014-07-23 17:01:01,035 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node HOST-10-18-40-26(cmPort: 45026 httpPort: 45025) registered with capability: , assigned nodeId HOST-10-18-40-26:45026 2014-07-23 17:01:01,037 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1406114813957_0002_000001 with final state: FAILED 2014-07-23 17:01:01,039 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000001 State change from LAUNCHED to FINAL_SAVING 2014-07-23 17:01:01,039 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: HOST-10-18-40-26:45026 Node Transitioned from NEW to RUNNING 2014-07-23 17:01:01,039 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node HOST-10-18-40-26:45026 clusterResource: 2014-07-23 17:01:01,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed... 2014-07-23 17:01:01,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed... 2014-07-23 17:01:01,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed... 2014-07-23 17:01:01,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed... 
2014-07-23 17:01:01,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2014-07-23 17:01:01,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2014-07-23 17:01:01,048 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2014-07-23 17:01:01,048 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2014-07-23 17:01:01,048 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2014-07-23 17:01:01,063 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Watcher event type: NodeDataChanged with state:SyncConnected for path:/rmstore/ZKRMStateRoot/RMAppRoot/application_1406114813957_0002/appattempt_1406114813957_0002_000001 for Service org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore in state org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: STARTED
2014-07-23 17:01:01,064 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1406114813957_0002_000001
2014-07-23 17:01:01,070 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000001 State change from FINAL_SAVING to FAILED
2014-07-23 17:01:01,070 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application attempt appattempt_1406114813957_0002_000001
2014-07-23 17:01:01,071 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1406114813957_0002_000002
2014-07-23 17:01:01,071 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1406114813957_0002_000001 is done. finalState=FAILED
2014-07-23 17:01:01,071 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Unknown application appattempt_1406114813957_0002_000001 has completed!
2014-07-23 17:01:01,074 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from NEW to SUBMITTED
2014-07-23 17:01:01,076 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1406114813957_0002 from user: testos activated in queue: b
2014-07-23 17:01:01,076 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1406114813957_0002 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@2cc24ae7, leaf-queue: b #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2014-07-23 17:01:01,076 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1406114813957_0002_000002 to scheduler from user testos in queue b
2014-07-23 17:01:01,077 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from SUBMITTED to SCHEDULED
2014-07-23 17:01:01,500 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of container container_1406114813957_0002_02_000001
2014-07-23 17:01:01,501 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1406114813957_0002_02_000001 Container Transitioned from NEW to ALLOCATED
2014-07-23 17:01:01,501 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1406114813957_0002_02_000001 of capacity on host HOST-10-18-40-84:45026, which currently has 1 containers, used and available
2014-07-23 17:01:01,501 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1406114813957_0002_000002 container=Container: [ContainerId: container_1406114813957_0002_02_000001, NodeId: HOST-10-18-40-84:45026, NodeHttpAddress: HOST-10-18-40-84:45025, Resource: , Priority: 0, Token: null, ] queue=b: capacity=0.2, absoluteCapacity=0.2, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=
2014-07-23 17:01:01,501 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.b stats: b: capacity=0.2, absoluteCapacity=0.2, usedResources=, usedCapacity=0.5, absoluteUsedCapacity=0.1, numApps=1, numContainers=1
2014-07-23 17:01:01,501 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.1 absoluteUsedCapacity=0.1 used= cluster=
2014-07-23 17:01:01,505 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : HOST-10-18-40-84:45026 for container : container_1406114813957_0002_02_000001
2014-07-23 17:01:01,508 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1406114813957_0002_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
2014-07-23 17:01:01,508 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1406114813957_0002_000002
2014-07-23 17:01:01,510 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1406114813957_0002 AttemptId: appattempt_1406114813957_0002_000002 MasterContainer: Container: [ContainerId: container_1406114813957_0002_02_000001, NodeId: HOST-10-18-40-84:45026, NodeHttpAddress: HOST-10-18-40-84:45025, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.18.40.84:45026 }, ]
2014-07-23 17:01:01,510 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from SCHEDULED to ALLOCATED_SAVING
2014-07-23 17:01:01,523 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from ALLOCATED_SAVING to ALLOCATED
2014-07-23 17:01:01,525 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1406114813957_0002_000002
2014-07-23 17:01:01,541 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1406114813957_0002_02_000001, NodeId: HOST-10-18-40-84:45026, NodeHttpAddress: HOST-10-18-40-84:45025, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.18.40.84:45026 }, ] for AM appattempt_1406114813957_0002_000002
2014-07-23 17:01:01,541 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1406114813957_0002_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir= -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/stdout 2>/stderr
2014-07-23 17:01:01,584 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1406114813957_0002_02_000001, NodeId: HOST-10-18-40-84:45026, NodeHttpAddress: HOST-10-18-40-84:45025, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.18.40.84:45026 }, ] for AM appattempt_1406114813957_0002_000002
2014-07-23 17:01:01,584 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from ALLOCATED to LAUNCHED
2014-07-23 17:01:02,477 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1406114813957_0002_02_000001 Container Transitioned from ACQUIRED to RUNNING
2014-07-23 17:01:03,073 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115063073, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 2048, 1, 0, 0, 4096, 3, 2048, 1, 0, 0, 0, 0
2014-07-23 17:01:04,025 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1406114813957_0002_000002 (auth:SIMPLE)
2014-07-23 17:01:04,032 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1406114813957_0002_000002
2014-07-23 17:01:04,033 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos IP=10.18.40.84 OPERATION=Register App Master TARGET=ApplicationMasterService RESULT=SUCCESS APPID=application_1406114813957_0002 APPATTEMPTID=appattempt_1406114813957_0002_000002
2014-07-23 17:01:04,042 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from LAUNCHED to RUNNING
2014-07-23 17:01:04,042 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the start data of application attempt appattempt_1406114813957_0002_000002
2014-07-23 17:01:04,042 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406114813957_0002 State change from ACCEPTED to RUNNING
2014-07-23 17:01:05,305 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1406114813957_0002_000002 with final state: FINISHING
2014-07-23 17:01:05,305 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from RUNNING to FINAL_SAVING
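The entries above trace the second AM attempt through the RMAppAttemptImpl lifecycle (NEW, SUBMITTED, SCHEDULED, ALLOCATED_SAVING, ALLOCATED, LAUNCHED, RUNNING). A minimal sketch for pulling those transitions out of a log like this one; the helper name and regex are illustrative, not part of YARN:

```python
import re

# Matches RMAppAttemptImpl "State change" lines as they appear in this log.
STATE_RE = re.compile(r"(appattempt_\d+_\d+_\d+) State change from (\w+) to (\w+)")

def attempt_transitions(lines):
    """Return (attempt_id, from_state, to_state) for each state-change line."""
    return [m.groups() for line in lines if (m := STATE_RE.search(line))]

log = [
    "2014-07-23 17:01:01,074 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from NEW to SUBMITTED",
    "2014-07-23 17:01:04,042 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from LAUNCHED to RUNNING",
]
print(attempt_transitions(log))
```

Filtering on the attempt id distinguishes the failed first attempt (_000001) from the retried second attempt (_000002) in the stream above.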
2014-07-23 17:01:05,305 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1406114813957_0002 with final state: FINISHING
2014-07-23 17:01:05,306 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406114813957_0002 State change from RUNNING to FINAL_SAVING
2014-07-23 17:01:05,330 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Watcher event type: NodeDataChanged with state:SyncConnected for path:/rmstore/ZKRMStateRoot/RMAppRoot/application_1406114813957_0002/appattempt_1406114813957_0002_000002 for Service org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore in state org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: STARTED
2014-07-23 17:01:05,330 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from FINAL_SAVING to FINISHING
2014-07-23 17:01:05,332 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1406114813957_0002
2014-07-23 17:01:05,374 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Watcher event type: NodeDataChanged with state:SyncConnected for path:/rmstore/ZKRMStateRoot/RMAppRoot/application_1406114813957_0002 for Service org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore in state org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: STARTED
2014-07-23 17:01:05,375 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406114813957_0002 State change from FINAL_SAVING to FINISHING
2014-07-23 17:01:06,074 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115066074, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 2048, 1, 0, 0, 4096, 3, 2048, 1, 0, 0, 0, 0
2014-07-23 17:01:07,486 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1406114813957_0002_000002
2014-07-23 17:01:07,486 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application attempt appattempt_1406114813957_0002_000002
2014-07-23 17:01:07,486 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1406114813957_0002_000002 State change from FINISHING to FINISHED
2014-07-23 17:01:07,487 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1406114813957_0002 State change from FINISHING to FINISHED
2014-07-23 17:01:07,487 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of application application_1406114813957_0002
2014-07-23 17:01:07,487 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testos OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1406114813957_0002
2014-07-23 17:01:07,487 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1406114813957_0002,name=word count,user=testos,queue=b,state=FINISHED,trackingUrl=http://10.18.40.95:45029/proxy/application_1406114813957_0002/jobhistory/job/job_1406114813957_0002,appMasterHost=,startTime=1406114984577,finishTime=1406115065305,finalStatus=SUCCEEDED
2014-07-23 17:01:07,487 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1406114813957_0002_000002
2014-07-23 17:01:07,497 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1406114813957_0002_02_000001 Container Transitioned from RUNNING to COMPLETED
2014-07-23 17:01:07,497 ERROR org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter: Error when storing the finish data of container container_1406114813957_0002_02_000001
2014-07-23 17:01:07,497 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1406114813957_0002_02_000001 in state: COMPLETED event:FINISHED
2014-07-23 17:01:07,497 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Released container container_1406114813957_0002_02_000001 of capacity on host HOST-10-18-40-84:45026, which currently has 0 containers, used and available, release resources=true
2014-07-23 17:01:07,497 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: b used= numContainers=0 user=testos user-resources=
2014-07-23 17:01:07,498 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1406114813957_0002_02_000001, NodeId: HOST-10-18-40-84:45026, NodeHttpAddress: HOST-10-18-40-84:45025, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.18.40.84:45026 }, ] queue=b: capacity=0.2, absoluteCapacity=0.2, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=
2014-07-23 17:01:07,498 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used= cluster=
2014-07-23 17:01:07,498 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.b stats: b: capacity=0.2, absoluteCapacity=0.2, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2014-07-23 17:01:07,498 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1406114813957_0002_000002 released container container_1406114813957_0002_02_000001 on node: host: HOST-10-18-40-84:45026 #containers=0 available=10240 used=0 with event: FINISHED
2014-07-23 17:01:07,498 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1406114813957_0002_000002 is done. finalState=FINISHED
2014-07-23 17:01:07,498 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1406114813957_0002 requests cleared
2014-07-23 17:01:07,499 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1406114813957_0002 user: testos queue: b #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2014-07-23 17:01:07,499 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1406114813957_0002 user: testos leaf-queue of parent: root #applications: 0
2014-07-23 17:01:09,075 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115069075, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 4096, 3, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:11,971 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node not found resyncing HOST-10-18-40-95:45026
2014-07-23 17:01:12,077 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115072077, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 4096, 3, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:12,975 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved HOST-10-18-40-95 to /default-rack
2014-07-23 17:01:12,975 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node HOST-10-18-40-95(cmPort: 45026 httpPort: 45025) registered with capability: , assigned nodeId HOST-10-18-40-95:45026
2014-07-23 17:01:12,975 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: HOST-10-18-40-95:45026 Node Transitioned from NEW to RUNNING
2014-07-23 17:01:12,976 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node HOST-10-18-40-95:45026 clusterResource:
2014-07-23 17:01:15,078 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115075078, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:18,079 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115078079, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:21,079 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115081079, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:24,080 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115084080, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:27,081 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115087081, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:30,082 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115090082, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:33,082 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115093082, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:36,083 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115096083, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:39,084 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115099084, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:42,085 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115102085, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:45,085 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115105085, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:48,086 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115108086, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:51,087 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115111087, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:54,087 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115114087, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:01:57,088 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115117088, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:02:00,089 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115120089, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:02:03,089 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115123089, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:02:06,090 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115126090, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:02:09,091 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115129091, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:02:12,092 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115132092, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:02:15,092 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115135092, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:02:18,093 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115138093, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:02:21,094 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115141094, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
2014-07-23 17:02:24,094 INFO org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: QUEUESTATE: 1406115144094, a, 0, 0, 0, 0, 24576, 19, 0, 0, 0, 0, 0, 0, b, 0, 0, 0, 0, 6144, 4, 0, 0, 0, 0, 0, 0
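The QUEUESTATE entries above repeat on a roughly three-second cadence and share a fixed comma-separated layout: a millisecond timestamp, then each queue's name followed by what appears to be twelve numeric counters (the field meanings are not documented in the log itself). A minimal parsing sketch under that layout assumption:

```python
def parse_queuestate(line):
    """Split a ProportionalCapacityPreemptionPolicy QUEUESTATE log line into a
    timestamp and a {queue_name: [12 counters]} map (assumed layout)."""
    payload = line.split("QUEUESTATE: ", 1)[1]
    fields = [f.strip() for f in payload.split(",")]
    ts, rest = int(fields[0]), fields[1:]
    queues = {}
    for i in range(0, len(rest), 13):  # queue name + 12 counters per queue
        queues[rest[i]] = [int(v) for v in rest[i + 1:i + 13]]
    return ts, queues

line = ("2014-07-23 17:01:09,075 INFO org.apache.hadoop.yarn.server."
        "resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy: "
        "QUEUESTATE: 1406115069075, a, 0, 0, 0, 0, 16384, 12, 0, 0, 0, 0, 0, 0, "
        "b, 0, 0, 0, 0, 4096, 3, 0, 0, 0, 0, 0, 0")
ts, queues = parse_queuestate(line)
```

On the entries above, the fifth and sixth counters track the jump in both queues' values (a: 16384/12 to 24576/19, b: 4096/3 to 6144/4) right after HOST-10-18-40-95 re-registers, consistent with those fields reflecting per-queue cluster memory and vcores.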