2016-08-15 14:50:06,695 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1471297764531_ns1_test-1471297750223 for backup backup_1471297762157 succeeded.
2016-08-15 14:50:06,696 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(462): Backup backup_1471297762157 completed.
2016-08-15 14:50:06,805 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:557550000000001
2016-08-15 14:50:06,806 DEBUG [ProcedureExecutor-4] procedure2.ProcedureExecutor(870): Procedure completed in 44.5110sec: FullTableBackupProcedure (targetRootDir=hdfs://localhost:55740/backupUT; backupId=backup_1471297762157; tables=ns1:test-1471297750223,ns2:test-14712977502231,ns3:test-14712977502232,ns4:test-14712977502233) id=13 state=FINISHED
2016-08-15 14:50:07,639 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@75965af8] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:55741 to delete [blk_1073741857_1033, blk_1073741860_1036, blk_1073741861_1037, blk_1073741864_1040, blk_1073741865_1041, blk_1073741867_1043, blk_1073741868_1044, blk_1073741870_1046]
2016-08-15 14:50:10,574 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=13
2016-08-15 14:50:10,575 DEBUG [main] impl.BackupSystemTable(157): read backup status from hbase:backup for: backup_1471297762157
2016-08-15 14:50:10,580 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:10,580 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56091; # active connections: 4
2016-08-15 14:50:10,581 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:10,581 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56091 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:10,583 DEBUG [main] backup.TestIncrementalBackup(64): writing 199 rows to ns1:test-1471297750223
2016-08-15 14:50:10,591 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:10,591 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56092; # active connections: 5
2016-08-15 14:50:10,592 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:10,592 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56092 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:10,593 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,596 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,598 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,599 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,601 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,603 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,604 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,606 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,607 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,609 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,610 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,612 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,614 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,616 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,617 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,619 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,621 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,622 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,624 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,626 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,628 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,629 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,631 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,632 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,634 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,635 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,637 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,639 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,640 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,642 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,644 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,645 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,647 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,648 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,650 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,652 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,653 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,655 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,656 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,658 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,659 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,661 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,663 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,875 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,877 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,878 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,879 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,881 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,882 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:10,919 DEBUG [main] backup.TestIncrementalBackup(75): written 199 rows to ns1:test-1471297750223
2016-08-15 14:50:10,923 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297762961
2016-08-15 14:50:10,926 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297762961
2016-08-15 14:50:10,928 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297762961
2016-08-15 14:50:10,930 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297762961
2016-08-15 14:50:10,932 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297762961
2016-08-15 14:50:10,949 DEBUG [main] backup.TestIncrementalBackup(87): written 199 rows to ns2:test-14712977502231
2016-08-15 14:50:10,950 INFO [main] util.BackupClientUtil(105): Using existing backup root dir: hdfs://localhost:55740/backupUT
2016-08-15 14:50:10,954 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] impl.BackupSystemTable(431): get incr backup table set from hbase:backup
2016-08-15 14:50:10,955 INFO [B.defaultRpcServer.handler=3,queue=0,port=55755] master.HMaster(2641): Incremental backup for the following table set: [ns1:test-1471297750223, ns2:test-14712977502231, ns3:test-14712977502232, ns4:test-14712977502233]
2016-08-15 14:50:10,961 INFO [B.defaultRpcServer.handler=3,queue=0,port=55755] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1b0e93c3 connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:10,966 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x1b0e93c30x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:10,967 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44e7b69c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:50:10,967 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:50:10,967 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:50:10,967 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] backup.BackupInfo(125): CreateBackupContext: 4 ns1:test-1471297750223
2016-08-15 14:50:10,968 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x1b0e93c3-0x156902d8a140010 connected
2016-08-15 14:50:11,078 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] procedure2.ProcedureExecutor(669): Procedure IncrementalTableBackupProcedure (targetRootDir=hdfs://localhost:55740/backupUT; backupId=backup_1471297810954; tables=ns1:test-1471297750223,ns2:test-14712977502231,ns3:test-14712977502232,ns4:test-14712977502233) id=14 state=RUNNABLE:PREPARE_INCREMENTAL added to the store.
2016-08-15 14:50:11,081 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-15 14:50:11,081 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:backup/write-master:557550000000002
2016-08-15 14:50:11,082 INFO [ProcedureExecutor-5] master.FullTableBackupProcedure(130): Backup backup_1471297810954 started at 1471297811082.
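The "writing 199 rows" step above is an ordinary HBase client write loop; the heavy regiongroup-2 sync traffic is just those puts being flushed through the WAL. A minimal sketch of what TestIncrementalBackup is doing at this point, using the standard 2.0-era client API; the row-key scheme and the column family/qualifier are assumptions, not taken from the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WriteRowsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("ns1:test-1471297750223"))) {
          for (int i = 0; i < 199; i++) {
            Put put = new Put(Bytes.toBytes("row-" + i));           // row-key scheme is an assumption
            put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"),   // family/qualifier are assumptions
                Bytes.toBytes(i));
            table.put(put);   // each put is appended to the region's WAL, hence the sync.* lines above
          }
        }
      }
    }

Each put lands in the WAL for the table's region group, which is why the 199-row burst shows up as regiongroup-2 sync activity in the log.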
2016-08-15 14:50:11,082 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471297810954 set status=RUNNING
2016-08-15 14:50:11,086 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:11,086 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56097; # active connections: 8
2016-08-15 14:50:11,087 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:11,087 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56097 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:11,091 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:11,091 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56098; # active connections: 6
2016-08-15 14:50:11,091 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:11,092 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56098 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:11,092 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297763800
2016-08-15 14:50:11,093 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(134): Backup session backup_1471297810954 has been started.
2016-08-15 14:50:11,093 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(431): get incr backup table set from hbase:backup
2016-08-15 14:50:11,095 DEBUG [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(216): For incremental backup, current table set is [ns1:test-1471297750223, ns2:test-14712977502231, ns3:test-14712977502232, ns4:test-14712977502233]
2016-08-15 14:50:11,096 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(180): read backup start code from hbase:backup
2016-08-15 14:50:11,097 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:55740/backupUT
2016-08-15 14:50:11,100 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(93): StartCode 1471297729233 for backupID backup_1471297810954
2016-08-15 14:50:11,100 INFO [ProcedureExecutor-5] impl.IncrementalBackupManager(104): Execute roll log procedure for incremental backup ...
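The BackupSystemTable calls above ("read backup start code", "read RS log ts") are plain reads of the hbase:backup meta table. A sketch of the idea only; the row key, column family, and value encoding below are illustrative stand-ins, since the real schema is internal to the backup feature:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    final class StartCodeSketch {
      // "startcode" row and "meta" family are hypothetical names for illustration.
      static long readStartCode(Connection conn) throws Exception {
        try (Table meta = conn.getTable(TableName.valueOf("hbase:backup"))) {
          Get get = new Get(Bytes.toBytes("startcode"));
          Result r = meta.get(get);
          byte[] v = r.getValue(Bytes.toBytes("meta"), Bytes.toBytes("startcode"));
          // string encoding of the long is an assumption; 1471297729233 in this run
          return v == null ? -1L : Long.parseLong(Bytes.toString(v));
        }
      }
    }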
2016-08-15 14:50:11,105 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-15 14:50:11,105 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56099; # active connections: 9
2016-08-15 14:50:11,106 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:11,107 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56099 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:11,109 INFO [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(652): Client=tyu//10.22.9.171 procedure request for: rolllog-proc
2016-08-15 14:50:11,109 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] procedure.ProcedureCoordinator(177): Submitting procedure rolllog
2016-08-15 14:50:11,110 INFO [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.Procedure(196): Starting procedure 'rolllog'
2016-08-15 14:50:11,110 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-15 14:50:11,110 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.Procedure(204): Procedure 'rolllog' starting 'acquire'
2016-08-15 14:50:11,110 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.Procedure(247): Starting procedure 'rolllog', kicking off acquire phase on members.
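The "acquire" phase kicked off here is a two-phase barrier run over ZooKeeper, with the znode layout the log shows (/1/rolllog-proc/{acquired,reached,abort}). A bare-bones coordinator-side sketch with the plain ZooKeeper client; polling stands in for the watches the real code uses, and abort/error handling is omitted:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Coordinator side of the acquire/reached barrier visible in this log.
    class RollLogCoordinatorSketch {
      void run(ZooKeeper zk, java.util.List<String> members) throws Exception {
        // Phase 1: announce the procedure; members see this via their watches.
        zk.create("/1/rolllog-proc/acquired/rolllog", new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        for (String m : members)  // wait until every member joins the acquired barrier
          while (zk.exists("/1/rolllog-proc/acquired/rolllog/" + m, false) == null)
            Thread.sleep(50);
        // Phase 2: create the global 'reached' node, releasing members into their work.
        zk.create("/1/rolllog-proc/reached/rolllog", new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        for (String m : members)  // wait for each member to report completion
          while (zk.exists("/1/rolllog-proc/reached/rolllog/" + m, false) == null)
            Thread.sleep(50);
      }
    }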
2016-08-15 14:50:11,111 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-15 14:50:11,111 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/rolllog-proc/acquired/rolllog
2016-08-15 14:50:11,111 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-15 14:50:11,111 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,111 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-15 14:50:11,112 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-15 14:50:11,111 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-15 14:50:11,112 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-15 14:50:11,112 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,112 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,112 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-15 14:50:11,112 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog
2016-08-15 14:50:11,112 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,112 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire'
2016-08-15 14:50:11,112 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog
2016-08-15 14:50:11,112 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-15 14:50:11,113 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-15 14:50:11,113 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35
2016-08-15 14:50:11,113 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog
2016-08-15 14:50:11,113 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup.
2016-08-15 14:50:11,113 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35
2016-08-15 14:50:11,113 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog
2016-08-15 14:50:11,113 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup.
2016-08-15 14:50:11,113 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(55): Constructing a LogRollBackupSubprocedure.
2016-08-15 14:50:11,113 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(55): Constructing a LogRollBackupSubprocedure.
2016-08-15 14:50:11,113 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog
2016-08-15 14:50:11,113 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog
2016-08-15 14:50:11,114 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms
2016-08-15 14:50:11,114 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-15 14:50:11,114 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms
2016-08-15 14:50:11,114 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-15 14:50:11,114 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage
2016-08-15 14:50:11,115 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired
2016-08-15 14:50:11,115 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,55757,1471297725443' joining acquired barrier for procedure (rolllog) in zk
2016-08-15 14:50:11,115 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage
2016-08-15 14:50:11,115 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired
2016-08-15 14:50:11,115 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,55755,1471297724766' joining acquired barrier for procedure (rolllog) in zk
2016-08-15 14:50:11,115 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,115 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,116 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,116 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,115 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,116 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,116 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-15 14:50:11,116 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-15 14:50:11,116 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] zookeeper.ZKUtil(367): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,116 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-15 14:50:11,116 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] zookeeper.ZKUtil(367): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,116 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-15 14:50:11,116 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-15 14:50:11,116 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:11,117 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,117 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,117 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-15 14:50:11,117 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-15 14:50:11,118 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,55757,1471297725443' joining acquired barrier for procedure 'rolllog' on coordinator
2016-08-15 14:50:11,118 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@51c1a6fc[Count = 1] remaining members to acquire global barrier
2016-08-15 14:50:11,118 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,118 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,118 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,118 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,118 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-15 14:50:11,118 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-15 14:50:11,118 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-15 14:50:11,118 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:11,118 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,119 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,119 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-15 14:50:11,119 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-15 14:50:11,119 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,55755,1471297724766' joining acquired barrier for procedure 'rolllog' on coordinator
2016-08-15 14:50:11,119 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@51c1a6fc[Count = 0] remaining members to acquire global barrier
2016-08-15 14:50:11,119 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.Procedure(212): Procedure 'rolllog' starting 'in-barrier' execution.
2016-08-15 14:50:11,120 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,120 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,120 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,120 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,120 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,120 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,120 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator.
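The member side mirrors the coordinator sketch above: each participant (the region server, and the master, which also hosts WALs here) announces itself under the acquired znode, blocks until the coordinator's reached node appears, does its work, then reports under reached. A matching sketch under the same assumptions:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Member-side counterpart to RollLogCoordinatorSketch.
    class RollLogMemberSketch {
      void run(ZooKeeper zk, String memberName, Runnable task) throws Exception {
        zk.create("/1/rolllog-proc/acquired/rolllog/" + memberName, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);   // join the acquired barrier
        while (zk.exists("/1/rolllog-proc/reached/rolllog", false) == null)
          Thread.sleep(50);                                       // wait for the global barrier
        task.run();                                               // the actual log-roll work
        zk.create("/1/rolllog-proc/reached/rolllog/" + memberName, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);   // report completion
      }
    }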
2016-08-15 14:50:11,120 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,120 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,121 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish.
2016-08-15 14:50:11,121 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator.
2016-08-15 14:50:11,120 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,121 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-15 14:50:11,121 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-15 14:50:11,121 DEBUG [rs(10.22.9.171,55757,1471297725443)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(74): ++ DRPC started: 10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,121 DEBUG [rs(10.22.9.171,55755,1471297724766)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(74): ++ DRPC started: 10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,121 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(367): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,121 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.Procedure(216): Waiting for all members to 'release'
2016-08-15 14:50:11,121 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish.
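Past the barrier, each member runs its RSRollLogTask. Judging from the surrounding lines, it times a forced roll of the server's WALs and records the new log number; a rough sketch of that flow, where `WalRoller` is a hypothetical stand-in for the server's internal log-rolling service, not a real HBase interface:

    import java.util.concurrent.Callable;

    // Member-side sketch of the RSRollLogTask flow seen in this log.
    class RsRollLogTaskSketch implements Callable<Void> {
      interface WalRoller { void requestRollAll(); }   // hypothetical

      private final WalRoller roller;
      RsRollLogTaskSketch(WalRoller roller) { this.roller = roller; }

      @Override public Void call() {
        long before = System.currentTimeMillis();
        roller.requestRollAll();                        // "WAL roll requested" in the log
        long took = System.currentTimeMillis() - before;
        System.out.println("log roll took " + took);    // mirrors the "log roll took 858" line
        return null;
      }
    }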
2016-08-15 14:50:11,121 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-15 14:50:11,121 INFO [rs(10.22.9.171,55755,1471297724766)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): Trying to roll log in backup subprocedure, current log number: 1471297762531 on 10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,121 INFO [rs(10.22.9.171,55757,1471297725443)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): Trying to roll log in backup subprocedure, current log number: 1471297762531 on 10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,122 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:11,122 DEBUG [master//10.22.9.171:0.logRoller] regionserver.LogRoller(135): WAL roll requested
2016-08-15 14:50:11,122 DEBUG [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(135): WAL roll requested
2016-08-15 14:50:11,122 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,122 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,123 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-15 14:50:11,124 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-15 14:50:11,125 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:11,125 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:11,125 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297811122
2016-08-15 14:50:11,126 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297811122
2016-08-15 14:50:11,129 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531
2016-08-15 14:50:11,129 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531
2016-08-15 14:50:11,130 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531
2016-08-15 14:50:11,130 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531
2016-08-15 14:50:11,134 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741851_1027{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 91
2016-08-15 14:50:11,134 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741852_1028{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 91
2016-08-15 14:50:11,187 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-15 14:50:11,391 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-15 14:50:11,539 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297811122
2016-08-15 14:50:11,539 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297811122
2016-08-15 14:50:11,540 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531 to hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531
2016-08-15 14:50:11,540 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531 to hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531
2016-08-15 14:50:11,544 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542
2016-08-15 14:50:11,545 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297811543
2016-08-15 14:50:11,550 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297762961
2016-08-15 14:50:11,550 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961
2016-08-15 14:50:11,551 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961
2016-08-15 14:50:11,551 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297762961
2016-08-15 14:50:11,554 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741854_1030{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 91
2016-08-15 14:50:11,555 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741853_1029{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 1205
2016-08-15 14:50:11,698 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-15 14:50:11,961 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297762961 with entries=7, filesize=1.18 KB; new WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542
2016-08-15 14:50:11,961 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297811543
2016-08-15 14:50:11,962 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231 to hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231
2016-08-15 14:50:11,962 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961 to hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961
2016-08-15 14:50:11,966 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964
2016-08-15 14:50:11,971 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:11,972 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:11,976 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741855_1031{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 22916
2016-08-15 14:50:11,980 DEBUG [rs(10.22.9.171,55755,1471297724766)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(86): log roll took 858
2016-08-15 14:50:11,980 INFO [rs(10.22.9.171,55755,1471297724766)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(87): After roll log in backup subprocedure, current log number: 1471297811122 on 10.22.9.171,55755,1471297724766 1471297762531
2016-08-15 14:50:11,980 DEBUG [rs(10.22.9.171,55755,1471297724766)-backup-pool29-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup
2016-08-15 14:50:11,984 DEBUG [rs(10.22.9.171,55755,1471297724766)-backup-pool29-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup
2016-08-15 14:50:11,985 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297763800
2016-08-15 14:50:11,986 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed
2016-08-15 14:50:11,986 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.9.171,55755,1471297724766' in zk
2016-08-15 14:50:11,987 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,987 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion
2016-08-15 14:50:11,988 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-15 14:50:11,988 DEBUG [member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed.
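The Rolled WAL/Archiving lines show why the roll matters for incremental backup: closed logs move to oldWALs with their creation timestamp embedded in the file name, so the backup can later pick exactly the files written between the previous roll point (1471297762531) and the fresh one (1471297811122). An illustrative filter only; parsing the trailing timestamp is an assumption about the naming convention seen in this log, not the backup code's actual selection logic:

    import java.util.List;
    import java.util.stream.Collectors;

    // Pick archived WALs whose embedded timestamp falls inside (lastRollTs, newRollTs].
    class WalSelectionSketch {
      static List<String> select(List<String> oldWals, long lastRollTs, long newRollTs) {
        return oldWals.stream()
            .filter(name -> {
              // e.g. "10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531"
              long ts = Long.parseLong(name.substring(name.lastIndexOf('.') + 1));
              return ts > lastRollTs && ts <= newRollTs;
            })
            .collect(Collectors.toList());
      }
    }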
2016-08-15 14:50:11,987 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,988 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,988 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,988 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-15 14:50:11,988 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-15 14:50:11,989 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-15 14:50:11,989 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:11,989 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,990 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55757,1471297725443
2016-08-15 14:50:11,990 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-15 14:50:11,990 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-15 14:50:11,990 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:11,990 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55755,1471297724766
2016-08-15 14:50:11,991 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.9.171,55755,1471297724766':
2016-08-15 14:50:11,991 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,55755,1471297724766' released barrier for procedure 'rolllog', counting down latch. Waiting for 1 more
2016-08-15 14:50:12,204 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-15 14:50:12,383 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379 with entries=201, filesize=22.38 KB; new WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964
2016-08-15 14:50:12,384 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595 to hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595
2016-08-15 14:50:12,389 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:50:12,394 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297763800
2016-08-15 14:50:12,395 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297763800
2016-08-15 14:50:12,399 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741856_1032{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 4383
2016-08-15 14:50:12,805 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297763800 with entries=8, filesize=4.28 KB; new WAL /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:50:12,813 DEBUG [rs(10.22.9.171,55757,1471297725443)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(86): log roll took 1691
2016-08-15 14:50:12,813 INFO [rs(10.22.9.171,55757,1471297725443)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(87): After roll log in backup subprocedure, current log number: 1471297811122 on 10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,813 DEBUG [rs(10.22.9.171,55757,1471297725443)-backup-pool30-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup
2016-08-15 14:50:12,816 DEBUG [rs(10.22.9.171,55757,1471297725443)-backup-pool30-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup
2016-08-15 14:50:12,819 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:50:12,820 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed
2016-08-15 14:50:12,820 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.9.171,55757,1471297725443' in zk
2016-08-15 14:50:12,823 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,823 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion
2016-08-15 14:50:12,823 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-15 14:50:12,823 DEBUG [member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed.
2016-08-15 14:50:12,823 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,824 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,824 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,824 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-15 14:50:12,824 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-15 14:50:12,825 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-15 14:50:12,825 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:12,825 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55755,1471297724766
2016-08-15 14:50:12,826 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,826 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-15 14:50:12,826 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-15 14:50:12,827 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:12,827 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55755,1471297724766
2016-08-15 14:50:12,827 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,828 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.9.171,55757,1471297725443':
2016-08-15 14:50:12,828 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,55757,1471297725443' released barrier for procedure 'rolllog', counting down latch. Waiting for 0 more
2016-08-15 14:50:12,828 INFO [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.Procedure(221): Procedure 'rolllog' execution completed
2016-08-15 14:50:12,828 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.Procedure(230): Running finish phase.
2016-08-15 14:50:12,828 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures
2016-08-15 14:50:12,828 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:rolllog
2016-08-15 14:50:12,828 INFO [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure rolllog including nodes /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort
2016-08-15 14:50:12,829 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2016-08-15 14:50:12,829 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2016-08-15 14:50:12,829 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog
2016-08-15 14:50:12,829 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog
2016-08-15 14:50:12,829 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2016-08-15 14:50:12,829 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2016-08-15 14:50:12,829 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort
2016-08-15 14:50:12,829 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:12,829 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/abort/rolllog
2016-08-15 14:50:12,830 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-15 14:50:12,830 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-15 14:50:12,829 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort
2016-08-15 14:50:12,830 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-15 14:50:12,830 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,830 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-15 14:50:12,830 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2016-08-15 14:50:12,830 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:12,830 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55755,1471297724766
2016-08-15 14:50:12,831 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,831 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-15 14:50:12,831 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:12,831 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:12,832 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] zookeeper.ZKUtil(365): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,832 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-15 14:50:12,832 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-15 14:50:12,832 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55755,1471297724766
2016-08-15 14:50:12,833 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,833 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-15 14:50:12,833 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-15 14:50:12,833 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-15 14:50:12,834 DEBUG [(10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-15 14:50:12,834 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Unable to get data of znode /1/rolllog-proc/abort/rolllog because node does not exist (not an error)
2016-08-15 14:50:12,834 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort
2016-08-15 14:50:12,834 INFO [B.defaultRpcServer.handler=2,queue=0,port=55755] master.LogRollMasterProcedureManager(116): Done waiting - exec procedure for rolllog
2016-08-15 14:50:12,834 INFO [B.defaultRpcServer.handler=2,queue=0,port=55755] master.LogRollMasterProcedureManager(117): Distributed roll log procedure is successful!
2016-08-15 14:50:12,834 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort
2016-08-15 14:50:12,834 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-15 14:50:12,834 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort
2016-08-15 14:50:12,835 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort
2016-08-15 14:50:12,835 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-15 14:50:12,835 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,835 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog
2016-08-15 14:50:12,835 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:12,835 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog
2016-08-15 14:50:12,835 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-15 14:50:12,835 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-15 14:50:12,835 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-15 14:50:12,835 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,55757,1471297725443
2016-08-15 14:50:12,836 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:12,836 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,55755,1471297724766
2016-08-15 14:50:12,836 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-15 14:50:12,836 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2016-08-15 14:50:12,836 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2481): Waiting a max of 300000 ms for procedure 'rolllog-proc : rolllog' to complete. (max 857 ms per retry)
2016-08-15 14:50:12,836 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2490): (#1) Sleeping: 100ms while waiting for procedure completion.
2016-08-15 14:50:12,937 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2496): Getting current status of procedure from master...
2016-08-15 14:50:12,945 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(904): Checking to see if procedure from request:rolllog-proc is done
2016-08-15 14:50:12,947 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup
2016-08-15 14:50:12,950 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(215): In getLogFilesForNewBackup() olderTimestamps: {10.22.9.171:55755=1471297729233, 10.22.9.171:55757=1471297729233} newestTimestamps: {10.22.9.171:55755=1471297762531, 10.22.9.171:55757=1471297762531}
2016-08-15 14:50:12,953 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297811122
2016-08-15 14:50:12,953 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200
2016-08-15 14:50:12,953 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(276): not excluding hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200 1471297731200 <= 1471297762531
2016-08-15 14:50:12,953 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297811543
2016-08-15 14:50:12,953 WARN [ProcedureExecutor-5] wal.DefaultWALProvider(349): Cannot parse a server name from path=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta; Not a host:port pair: 10.22.9.171,55755,1471297724766.meta
2016-08-15 14:50:12,953 WARN [ProcedureExecutor-5] util.BackupServerUtil(237): Skip log file (can't parse): hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta
2016-08-15 14:50:12,955 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297811122
2016-08-15 14:50:12,955 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730
2016-08-15 14:50:12,955 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(276): not excluding hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730 1471297733730 <= 1471297762531
2016-08-15 14:50:12,955 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297763800
2016-08-15 14:50:12,955 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:50:12,955 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297763379
2016-08-15 14:50:12,955 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964
2016-08-15 14:50:12,955 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297762961
2016-08-15 14:50:12,955 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542
2016-08-15 14:50:12,956 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(316): excluding old wal hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297729233 1471297729233 <= 1471297729233
2016-08-15 14:50:12,957 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(325): newest log hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961
2016-08-15 14:50:12,957 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(316): excluding old wal hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297729233 1471297729233 <= 1471297729233
2016-08-15 14:50:12,957 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(500): get WAL files from hbase:backup
2016-08-15 14:50:12,962 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:55740/backupUT/backup_1471297762157/hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297729233
2016-08-15 14:50:12,962 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:55740/backupUT/backup_1471297762157/hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297729233
2016-08-15 14:50:12,962 DEBUG [ProcedureExecutor-5] backup.BackupInfo(313): setting incr backup file list
2016-08-15 14:50:12,963 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200
2016-08-15 14:50:12,963 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730
2016-08-15 14:50:12,963 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531
2016-08-15 14:50:12,963 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531
2016-08-15 14:50:12,963 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595
2016-08-15 14:50:12,963 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231
2016-08-15 14:50:13,072 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x47e1601f connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:13,077 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x47e1601f0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:13,078 DEBUG [ProcedureExecutor-5] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@645607dd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:50:13,078 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:50:13,078 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:50:13,078 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x47e1601f-0x156902d8a140011 connected
2016-08-15 14:50:13,081 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:13,081 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56107; # active connections: 10
2016-08-15 14:50:13,082 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:13,082 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56107 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:13,083 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns2:test-14712977502231
2016-08-15 14:50:13,095 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741891_1067{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 295
2016-08-15 14:50:13,209 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-15 14:50:13,501 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/backupUT/backup_1471297810954/ns2/test-14712977502231/.tabledesc/.tableinfo.0000000001
2016-08-15 14:50:13,501 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo.
2016-08-15 14:50:13,503 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x47e1601f connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:13,508 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x47e1601f0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:13,510 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns2:test-14712977502231
2016-08-15 14:50:13,510 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x47e1601f-0x156902d8a140012 connected
2016-08-15 14:50:13,517 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741892_1068{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 50
2016-08-15 14:50:13,924 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns2:test-14712977502231
2016-08-15 14:50:13,927 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns3:test-14712977502232
2016-08-15 14:50:13,940 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741893_1069{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 296
2016-08-15 14:50:14,349 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/backupUT/backup_1471297810954/ns3/test-14712977502232/.tabledesc/.tableinfo.0000000001
2016-08-15 14:50:14,350 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo.
2016-08-15 14:50:14,350 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x47e1601f connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:14,354 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x47e1601f0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:14,360 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns3:test-14712977502232
2016-08-15 14:50:14,360 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x47e1601f-0x156902d8a140013 connected
2016-08-15 14:50:14,367 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741894_1070{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 50
2016-08-15 14:50:14,774 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns3:test-14712977502232
2016-08-15 14:50:14,776 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns4:test-14712977502233
2016-08-15 14:50:14,790 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741895_1071{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 296
2016-08-15 14:50:15,194 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/backupUT/backup_1471297810954/ns4/test-14712977502233/.tabledesc/.tableinfo.0000000001
2016-08-15 14:50:15,195 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo.
2016-08-15 14:50:15,195 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x47e1601f connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:15,199 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x47e1601f0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:15,202 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns4:test-14712977502233
2016-08-15 14:50:15,202 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x47e1601f-0x156902d8a140014 connected
2016-08-15 14:50:15,209 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741896_1072{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 50
2016-08-15 14:50:15,213 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-15 14:50:15,615 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns4:test-14712977502233
2016-08-15 14:50:15,618 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns1:test-1471297750223
2016-08-15 14:50:15,634 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741897_1073{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 294
2016-08-15 14:50:16,044 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/backupUT/backup_1471297810954/ns1/test-1471297750223/.tabledesc/.tableinfo.0000000001
2016-08-15 14:50:16,045 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo.
2016-08-15 14:50:16,045 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x47e1601f connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:16,049 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x47e1601f0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:16,051 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns1:test-1471297750223
2016-08-15 14:50:16,051 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x47e1601f-0x156902d8a140015 connected
2016-08-15 14:50:16,060 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741898_1074{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 49
2016-08-15 14:50:16,463 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns1:test-1471297750223
2016-08-15 14:50:16,464 INFO [ProcedureExecutor-5] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140011
2016-08-15 14:50:16,467 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:50:16,468 INFO [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(125): Incremental copy is starting.
2016-08-15 14:50:16,468 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (1832886937) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:16,468 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56107 because read count=-1. Number of active connections: 10
2016-08-15 14:50:16,472 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(307): Doing COPY_TYPE_DISTCP
2016-08-15 14:50:16,501 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(317): DistCp options: [hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200, hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730, hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531, hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531, hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595, hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231, hdfs://localhost:55740/backupUT/backup_1471297810954/WALs]
2016-08-15 14:50:16,728 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741899_1075{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 981
2016-08-15 14:50:17,163 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741900_1076{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 1629
2016-08-15 14:50:17,589 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741901_1077{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 91
2016-08-15 14:50:18,012 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741902_1078{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 91
2016-08-15 14:50:18,437 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741903_1079{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 21930
2016-08-15 14:50:18,865 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741904_1080{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 22132
2016-08-15 14:50:19,220 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-15 14:50:19,712 INFO [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(247): Progress: 100.0% subTask: 1.0 mapProgress: 1.0
2016-08-15 14:50:19,712 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471297810954 set status=RUNNING
2016-08-15 14:50:19,714 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:50:19,715 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(140): Backup progress data "100%" has been updated to hbase:backup for backup_1471297810954
2016-08-15 14:50:19,715 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(256): Backup progress data updated to hbase:backup: "Progress: 100.0% - 46854 bytes copied."
2016-08-15 14:50:19,715 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(271): DistCp job-id: job_local1151087124_0005 completed: true true
2016-08-15 14:50:19,723 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(274): Counters: 23
	File System Counters
		FILE: Number of bytes read=94399874
		FILE: Number of bytes written=94694222
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=118344
		HDFS: Number of bytes written=2331513
		HDFS: Number of read operations=604
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=285
	Map-Reduce Framework
		Map input records=6
		Map output records=0
		Input split bytes=262
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=0
		Total committed heap usage (bytes)=1216872448
	File Input Format Counters
		Bytes Read=1998
	File Output Format Counters
		Bytes Written=0
	org.apache.hadoop.tools.mapred.CopyMapper$Counter
		BYTESCOPIED=46854
		BYTESEXPECTED=46854
		COPY=6
2016-08-15 14:50:19,723 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(325): list of hdfs://localhost:55740/backupUT/backup_1471297810954/WALs for distcp 0
2016-08-15 14:50:19,726 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(330): LocatedFileStatus{path=hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471297817990; access_time=1471297817581; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-15 14:50:19,726 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(330): LocatedFileStatus{path=hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200; isDirectory=false; length=981; replication=1; blocksize=134217728; modification_time=1471297817134; access_time=1471297816718; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-15 14:50:19,726 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(330): LocatedFileStatus{path=hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471297818417; access_time=1471297818003; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-15 14:50:19,726 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(330): LocatedFileStatus{path=hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730; isDirectory=false; length=1629; replication=1; blocksize=134217728; modification_time=1471297817569; access_time=1471297817153; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-15 14:50:19,726 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(330): LocatedFileStatus{path=hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595; isDirectory=false; length=21930; replication=1; blocksize=134217728; modification_time=1471297818843; access_time=1471297818428; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-15 14:50:19,726 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(330): LocatedFileStatus{path=hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231; isDirectory=false; length=22132; replication=1; blocksize=134217728; modification_time=1471297819268; access_time=1471297818855; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-15 14:50:19,730 INFO [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(176): Incremental copy from hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200,hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730,hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531,hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531,hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595,hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231 to hdfs://localhost:55740/backupUT/backup_1471297810954/WALs finished.
2016-08-15 14:50:19,730 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(480): add WAL files to hbase:backup: backup_1471297810954 hdfs://localhost:55740/backupUT files [hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200,hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730,hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531,hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531,hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595,hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231]
2016-08-15 14:50:19,730 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200
2016-08-15 14:50:19,730 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730
2016-08-15 14:50:19,730 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531
2016-08-15 14:50:19,730 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531
2016-08-15 14:50:19,730 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595
2016-08-15 14:50:19,730 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231
2016-08-15 14:50:19,732 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:50:19,843 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:55740/backupUT
2016-08-15 14:50:19,848 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(337): write RS log time stamps to hbase:backup for tables [ns2:test-14712977502231,ns3:test-14712977502232,ns4:test-14712977502233,ns1:test-1471297750223]
2016-08-15 14:50:19,850 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:50:19,851 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:55740/backupUT
2016-08-15 14:50:19,855 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(205): write backup start code to hbase:backup 1471297762531
2016-08-15 14:50:19,856 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:50:19,857 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-15 14:50:19,857 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471297810954
2016-08-15 14:50:19,857 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-15 14:50:19,857 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-15 14:50:19,862 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-15 14:50:19,862 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:55740/backupUT backup_1471297810954 INCREMENTAL
2016-08-15 14:50:19,863 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471297810954
2016-08-15 14:50:19,863 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-15 14:50:19,863 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-15 14:50:19,866 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-15 14:50:19,872 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741905_1081{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:50:19,873 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:55740/backupUT/backup_1471297810954/ns2/test-14712977502231/.backup.manifest
2016-08-15 14:50:19,873 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-15 14:50:19,873 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471297810954
2016-08-15 14:50:19,873 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-15 14:50:19,873 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-15 14:50:19,876 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-15 14:50:19,876 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:55740/backupUT backup_1471297810954 INCREMENTAL
2016-08-15 14:50:19,876 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471297810954
2016-08-15 14:50:19,876 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-15 14:50:19,876 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-15 14:50:19,878 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-15 14:50:19,885 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741906_1082{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 814
2016-08-15 14:50:20,290 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:55740/backupUT/backup_1471297810954/ns3/test-14712977502232/.backup.manifest
2016-08-15 14:50:20,290 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-15 14:50:20,290 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471297810954
2016-08-15 14:50:20,290 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-15 14:50:20,290 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-15 14:50:20,295 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-15 14:50:20,295 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:55740/backupUT backup_1471297810954 INCREMENTAL
2016-08-15 14:50:20,295 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471297810954
2016-08-15 14:50:20,295 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-15 14:50:20,295 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-15 14:50:20,299 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-15 14:50:20,306 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741907_1083{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 814
2016-08-15 14:50:20,709 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:55740/backupUT/backup_1471297810954/ns4/test-14712977502233/.backup.manifest
2016-08-15 14:50:20,709 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-15 14:50:20,709 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471297810954
2016-08-15 14:50:20,709 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-15 14:50:20,709 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-15 14:50:20,714 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-15 14:50:20,714 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:55740/backupUT backup_1471297810954 INCREMENTAL
2016-08-15 14:50:20,714 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471297810954
2016-08-15 14:50:20,714 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-15 14:50:20,714 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-15 14:50:20,717 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-15 14:50:20,725 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741908_1084{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 811
2016-08-15 14:50:21,131 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:55740/backupUT/backup_1471297810954/ns1/test-1471297750223/.backup.manifest
2016-08-15 14:50:21,132 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 4 tables exist in table set.
2016-08-15 14:50:21,132 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471297810954
2016-08-15 14:50:21,132 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-15 14:50:21,132 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-15 14:50:21,136 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-15 14:50:21,136 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:55740/backupUT backup_1471297810954 INCREMENTAL
2016-08-15 14:50:21,145 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741909_1085{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 1052
2016-08-15 14:50:21,552 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/.backup.manifest
2016-08-15 14:50:21,552 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(439): in-fly convert code here, provided by future jira
2016-08-15 14:50:21,552 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(447): Backup backup_1471297810954 finished: type=INCREMENTAL,tablelist=ns2:test-14712977502231;ns3:test-14712977502232;ns4:test-14712977502233;ns1:test-1471297750223,targetRootDir=hdfs://localhost:55740/backupUT,startts=1471297811082,completets=1471297819857,bytescopied=0
2016-08-15 14:50:21,552 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471297810954 set status=COMPLETE
2016-08-15 14:50:21,555 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:50:21,558 INFO [ProcedureExecutor-5] master.FullTableBackupProcedure(462): Backup backup_1471297810954 completed.
2016-08-15 14:50:21,667 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:557550000000002
2016-08-15 14:50:21,668 DEBUG [ProcedureExecutor-5] procedure2.ProcedureExecutor(870): Procedure completed in 10.5910sec: IncrementalTableBackupProcedure (targetRootDir=hdfs://localhost:55740/backupUT; backupId=backup_1471297810954; tables=ns1:test-1471297750223,ns2:test-14712977502231,ns3:test-14712977502232,ns4:test-14712977502233) id=14 state=FINISHED
2016-08-15 14:50:29,224 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-15 14:50:29,224 DEBUG [main] impl.BackupSystemTable(157): read backup status from hbase:backup for: backup_1471297810954
2016-08-15 14:50:29,231 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/.backup.manifest
2016-08-15 14:50:29,236 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157
2016-08-15 14:50:29,236 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns1/test-1471297750223/.backup.manifest
2016-08-15 14:50:29,237 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/.backup.manifest
2016-08-15 14:50:29,240 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157
2016-08-15 14:50:29,240 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns2/test-14712977502231/.backup.manifest
2016-08-15 14:50:29,240 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns3/test-14712977502232/.backup.manifest
2016-08-15 14:50:29,243 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157
2016-08-15 14:50:29,243 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns3/test-14712977502232/.backup.manifest
2016-08-15 14:50:29,244 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns4/test-14712977502233/.backup.manifest
2016-08-15 14:50:29,248 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157
2016-08-15 14:50:29,248 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns4/test-14712977502233/.backup.manifest
2016-08-15 14:50:29,249 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4aa4a003 connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:29,253 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x4aa4a0030x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:29,254 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c632a2a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:50:29,254 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:50:29,255 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x4aa4a003-0x156902d8a140016 connected
2016-08-15 14:50:29,255 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:50:29,257 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:29,257 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56155; # active connections: 10
2016-08-15 14:50:29,258 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:29,258 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56155 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:29,259 INFO [main] impl.RestoreClientImpl(167): HBase table ns1:table1_restore does not exist. It will be created during restore process
2016-08-15 14:50:29,259 INFO [main] impl.RestoreClientImpl(167): HBase table ns2:table2_restore does not exist. It will be created during restore process
2016-08-15 14:50:29,260 INFO [main] impl.RestoreClientImpl(167): HBase table ns3:table3_restore does not exist. It will be created during restore process
2016-08-15 14:50:29,261 INFO [main] impl.RestoreClientImpl(167): HBase table ns4:table4_restore does not exist. It will be created during restore process
2016-08-15 14:50:29,261 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140016
2016-08-15 14:50:29,262 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:50:29,265 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-536335855) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:29,265 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56155 because read count=-1. Number of active connections: 10
2016-08-15 14:50:29,266 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-15 14:50:29,269 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/.backup.manifest
2016-08-15 14:50:29,273 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157
2016-08-15 14:50:29,273 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns1/test-1471297750223/.backup.manifest
2016-08-15 14:50:29,273 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns1:test-1471297750223' to 'ns1:table1_restore' from full backup image hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223
2016-08-15 14:50:29,284 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x139b8cd2 connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:29,286 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x139b8cd20x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:29,287 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ad8600a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:50:29,287 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:50:29,287 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:50:29,288 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x139b8cd2-0x156902d8a140017 connected
2016-08-15 14:50:29,289 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:29,290 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56159; # active connections: 10
2016-08-15 14:50:29,290 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:29,290 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56159 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:29,291 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns1:table1_restore'
2016-08-15 14:50:29,291 DEBUG [main] util.RestoreServerUtil(495): Parsing region dir: hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/archive/data/ns1/test-1471297750223/d0d5e63c01f66001cc1c60dbba147803
2016-08-15 14:50:29,292 DEBUG [main] util.RestoreServerUtil(525): Parsing family dir [hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/archive/data/ns1/test-1471297750223/d0d5e63c01f66001cc1c60dbba147803/f in region [hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/archive/data/ns1/test-1471297750223/d0d5e63c01f66001cc1c60dbba147803]
2016-08-15 14:50:29,293 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-15 14:50:29,296 DEBUG [main] util.RestoreServerUtil(545): Trying to figure out region boundaries hfile=hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/archive/data/ns1/test-1471297750223/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f first=row0 last=row99
2016-08-15 14:50:29,303 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-15 14:50:29,303 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56160; # active connections: 11
2016-08-15 14:50:29,304 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:29,304 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56160 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:29,306 INFO [B.defaultRpcServer.handler=3,queue=0,port=55755] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-15 14:50:29,411 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns1:table1_restore) id=15 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-15 14:50:29,414 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-15 14:50:29,416 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:557550000000000
2016-08-15 14:50:29,518 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-15 14:50:29,537 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741910_1086{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 290
2016-08-15 14:50:29,721 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-15 14:50:29,948 DEBUG [ProcedureExecutor-6] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns1/table1_restore/.tabledesc/.tableinfo.0000000001
2016-08-15 14:50:29,949 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(6162): creating HRegion ns1:table1_restore HTD == 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp Table name == ns1:table1_restore
2016-08-15 14:50:29,958 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741911_1087{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 45
2016-08-15 14:50:30,026 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-15 14:50:30,364 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.
2016-08-15 14:50:30,364 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.: disabling compactions & flushes
2016-08-15 14:50:30,364 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.
2016-08-15 14:50:30,365 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.
2016-08-15 14:50:30,478 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478."}
2016-08-15 14:50:30,480 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:30,480 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1571): Added 1
2016-08-15 14:50:30,531 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-15 14:50:30,589 INFO [ProcedureExecutor-6] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,55757,1471297725443
2016-08-15 14:50:30,589 ERROR [ProcedureExecutor-6] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-15 14:50:30,590 INFO [ProcedureExecutor-6] master.RegionStates(1106): Transition {1a2af1efddb74842cc0d4b4b051d5478 state=OFFLINE, ts=1471297830588, server=null} to {1a2af1efddb74842cc0d4b4b051d5478 state=PENDING_OPEN, ts=1471297830590, server=10.22.9.171,55757,1471297725443}
2016-08-15 14:50:30,590 INFO [ProcedureExecutor-6] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. with state=PENDING_OPEN, sn=10.22.9.171,55757,1471297725443
2016-08-15 14:50:30,591 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:30,592 INFO [PriorityRpcServer.handler=2,queue=0,port=55757] regionserver.RSRpcServices(1666): Open ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.
2016-08-15 14:50:30,601 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(6339): Opening region: {ENCODED => 1a2af1efddb74842cc0d4b4b051d5478, NAME => 'ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.', STARTKEY => '', ENDKEY => ''}
2016-08-15 14:50:30,601 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table1_restore 1a2af1efddb74842cc0d4b4b051d5478
2016-08-15 14:50:30,602 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.
2016-08-15 14:50:30,604 INFO [StoreOpener-1a2af1efddb74842cc0d4b4b051d5478-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-15 14:50:30,605 INFO [StoreOpener-1a2af1efddb74842cc0d4b4b051d5478-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-15 14:50:30,605 DEBUG [StoreOpener-1a2af1efddb74842cc0d4b4b051d5478-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f
2016-08-15 14:50:30,606 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478
2016-08-15 14:50:30,611 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-15 14:50:30,611 INFO [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(871): Onlined 1a2af1efddb74842cc0d4b4b051d5478; next sequenceid=2
2016-08-15 14:50:30,615 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964
2016-08-15 14:50:30,616 INFO [PostOpenDeployTasks:1a2af1efddb74842cc0d4b4b051d5478] regionserver.HRegionServer(1952): Post open deploy tasks for ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.
2016-08-15 14:50:30,617 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.AssignmentManager(2884): Got transition OPENED for {1a2af1efddb74842cc0d4b4b051d5478 state=PENDING_OPEN, ts=1471297830590, server=10.22.9.171,55757,1471297725443} from 10.22.9.171,55757,1471297725443
2016-08-15 14:50:30,617 INFO [B.defaultRpcServer.handler=1,queue=0,port=55755] master.RegionStates(1106): Transition {1a2af1efddb74842cc0d4b4b051d5478 state=PENDING_OPEN, ts=1471297830590, server=10.22.9.171,55757,1471297725443} to {1a2af1efddb74842cc0d4b4b051d5478 state=OPEN, ts=1471297830617, server=10.22.9.171,55757,1471297725443}
2016-08-15 14:50:30,617 INFO [B.defaultRpcServer.handler=1,queue=0,port=55755] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. with state=OPEN, openSeqNum=2, server=10.22.9.171,55757,1471297725443
2016-08-15 14:50:30,618 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:30,618 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.RegionStates(452): Onlined 1a2af1efddb74842cc0d4b4b051d5478 on 10.22.9.171,55757,1471297725443
2016-08-15 14:50:30,619 DEBUG [ProcedureExecutor-6] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,55757,1471297725443
2016-08-15 14:50:30,619 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297830619,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"}
2016-08-15 14:50:30,619 ERROR [B.defaultRpcServer.handler=1,queue=0,port=55755] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-15 14:50:30,619 DEBUG [PostOpenDeployTasks:1a2af1efddb74842cc0d4b4b051d5478] regionserver.HRegionServer(1979): Finished post open deploy task for ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.
2016-08-15 14:50:30,620 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] handler.OpenRegionHandler(126): Opened ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. on 10.22.9.171,55757,1471297725443
2016-08-15 14:50:30,620 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:30,621 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to ENABLED in META
2016-08-15 14:50:30,945 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:557550000000000
2016-08-15 14:50:30,946 DEBUG [ProcedureExecutor-6] procedure2.ProcedureExecutor(870): Procedure completed in 1.5320sec: CreateTableProcedure (table=ns1:table1_restore) id=15 owner=tyu state=FINISHED
2016-08-15 14:50:31,534 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-15 14:50:31,534 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns1:table1_restore completed
2016-08-15 14:50:31,534 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-15 14:50:31,535 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140017
2016-08-15 14:50:31,538 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:50:31,539 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56160 because read count=-1. Number of active connections: 11
2016-08-15 14:50:31,539 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:55740; local cluster node: hdfs://localhost:55740
2016-08-15 14:50:31,539 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/archive/data/ns1/test-1471297750223 on local cluster, back it up before restore
2016-08-15 14:50:31,539 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56159 because read count=-1. Number of active connections: 11
2016-08-15 14:50:31,539 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (-2113599593) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:31,539 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (-1511314640) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:31,559 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741912_1088{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 12093
2016-08-15 14:50:31,965 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore
2016-08-15 14:50:31,967 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore
2016-08-15 14:50:31,989 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803
2016-08-15 14:50:31,989 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7df9b28b connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:31,994 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x7df9b28b0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:31,995 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3eb06ff5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:50:31,995 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:50:31,995 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:50:31,996 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x7df9b28b-0x156902d8a140018 connected
2016-08-15 14:50:31,997 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:31,998 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56167; # active connections: 10
2016-08-15 14:50:31,998 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:31,999 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56167 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:32,004 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available
2016-08-15 14:50:32,013 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-15 14:50:32,013 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56168; # active connections: 11
2016-08-15 14:50:32,014 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:32,014 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56168 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:32,029 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-15 14:50:32,033 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f first=row0 last=row99
2016-08-15 14:50:32,047 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478., hostname=10.22.9.171,55757,1471297725443, seqNum=2 for row with hfile group [{[B@46f41b04,hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f}]
2016-08-15 14:50:32,056 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:32,056 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56169; # active connections: 7
2016-08-15 14:50:32,057 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:32,057 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56169 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:32,057 INFO [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(670): Validating hfile at hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f for inclusion in store f region ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.
2016-08-15 14:50:32,062 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(682): HFile bounds: first=row0 last=row99
2016-08-15 14:50:32,062 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(684): Region bounds: first= last=
2016-08-15 14:50:32,065 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/15ef6fbfedaf4639b3ee9276dae41731_SeqId_4_
2016-08-15 14:50:32,066 INFO [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(742): Loaded HFile hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f into store 'f' as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/15ef6fbfedaf4639b3ee9276dae41731_SeqId_4_ - updating store file list.
2016-08-15 14:50:32,072 INFO [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(777): Loaded HFile hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/15ef6fbfedaf4639b3ee9276dae41731_SeqId_4_ into store 'f'
2016-08-15 14:50:32,072 INFO [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f into store f (new location: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/15ef6fbfedaf4639b3ee9276dae41731_SeqId_4_)
2016-08-15 14:50:32,077 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964
2016-08-15 14:50:32,079 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-15 14:50:32,079 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140018
2016-08-15 14:50:32,080 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:50:32,081 INFO [main] impl.RestoreClientImpl(292): ns1:test-1471297750223 has been successfully restored to ns1:table1_restore
2016-08-15 14:50:32,081 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (1322702797) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:32,081 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-15 14:50:32,081 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56168 because read count=-1. Number of active connections: 11
2016-08-15 14:50:32,081 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (446776698) to /10.22.9.171:55757 from tyu: closed
2016-08-15 14:50:32,081 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (59151692) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:32,081 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56169 because read count=-1. Number of active connections: 7
2016-08-15 14:50:32,081 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56167 because read count=-1. Number of active connections: 11
2016-08-15 14:50:32,081 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471297762157 hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/
2016-08-15 14:50:32,082 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-15 14:50:32,083 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/.backup.manifest
2016-08-15 14:50:32,085 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157
2016-08-15 14:50:32,086 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns2/test-14712977502231/.backup.manifest
2016-08-15 14:50:32,086 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns2:test-14712977502231' to 'ns2:table2_restore' from full backup image hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231
2016-08-15 14:50:32,095 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1ab4c4b connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:32,097 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x1ab4c4b0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:32,097 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6c528304, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:50:32,098 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:50:32,098 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:50:32,099 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x1ab4c4b-0x156902d8a140019 connected
2016-08-15 14:50:32,100 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:32,100 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56173; # active connections: 10
2016-08-15 14:50:32,101 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:32,101 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56173 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:32,102 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns2:table2_restore'
2016-08-15 14:50:32,102 DEBUG [main] util.RestoreServerUtil(495): Parsing region dir: hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/archive/data/ns2/test-14712977502231/7ac1188f2e9c4e31e67f0d3df5f7670d
2016-08-15 14:50:32,103 DEBUG [main] util.RestoreServerUtil(525): Parsing family dir [hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/archive/data/ns2/test-14712977502231/7ac1188f2e9c4e31e67f0d3df5f7670d/f in region [hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/archive/data/ns2/test-14712977502231/7ac1188f2e9c4e31e67f0d3df5f7670d]
2016-08-15 14:50:32,104 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-15 14:50:32,107 DEBUG [main] util.RestoreServerUtil(545): Trying to figure out region boundaries hfile=hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/archive/data/ns2/test-14712977502231/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 first=row0 last=row99
2016-08-15 14:50:32,109 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-15 14:50:32,109 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56174; # active connections: 11
2016-08-15 14:50:32,110 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:32,110 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56174 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:32,111 INFO [B.defaultRpcServer.handler=0,queue=0,port=55755] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-15 14:50:32,218 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns2:table2_restore) id=16 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-15 14:50:32,222 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-15 14:50:32,223 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:557550000000000
2016-08-15 14:50:32,326 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-15 14:50:32,345 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741913_1089{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 290
2016-08-15 14:50:32,531 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-15 14:50:32,755 DEBUG [ProcedureExecutor-7] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns2/table2_restore/.tabledesc/.tableinfo.0000000001
2016-08-15 14:50:32,757 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(6162): creating HRegion ns2:table2_restore HTD == 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp Table name == ns2:table2_restore
2016-08-15 14:50:32,766 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741914_1090{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 45
2016-08-15 14:50:32,837 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-15 14:50:33,171 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.
2016-08-15 14:50:33,172 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.: disabling compactions & flushes
2016-08-15 14:50:33,172 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.
2016-08-15 14:50:33,172 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.
2016-08-15 14:50:33,284 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825."}
2016-08-15 14:50:33,285 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:33,286 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1571): Added 1
2016-08-15 14:50:33,347 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-15 14:50:33,392 INFO [ProcedureExecutor-7] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,55757,1471297725443
2016-08-15 14:50:33,392 ERROR [ProcedureExecutor-7] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-15 14:50:33,392 INFO [ProcedureExecutor-7] master.RegionStates(1106): Transition {398ca33ca6e640575cac0c2baa029825 state=OFFLINE, ts=1471297833391, server=null} to {398ca33ca6e640575cac0c2baa029825 state=PENDING_OPEN, ts=1471297833392, server=10.22.9.171,55757,1471297725443}
2016-08-15 14:50:33,393 INFO [ProcedureExecutor-7] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. with state=PENDING_OPEN, sn=10.22.9.171,55757,1471297725443
2016-08-15 14:50:33,393 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:33,395 INFO [PriorityRpcServer.handler=3,queue=1,port=55757] regionserver.RSRpcServices(1666): Open ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.
2016-08-15 14:50:33,399 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] regionserver.HRegion(6339): Opening region: {ENCODED => 398ca33ca6e640575cac0c2baa029825, NAME => 'ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.', STARTKEY => '', ENDKEY => ''}
2016-08-15 14:50:33,400 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table2_restore 398ca33ca6e640575cac0c2baa029825
2016-08-15 14:50:33,400 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.
2016-08-15 14:50:33,402 INFO [StoreOpener-398ca33ca6e640575cac0c2baa029825-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-15 14:50:33,403 INFO [StoreOpener-398ca33ca6e640575cac0c2baa029825-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-15 14:50:33,403 DEBUG [StoreOpener-398ca33ca6e640575cac0c2baa029825-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f
2016-08-15 14:50:33,404 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825
2016-08-15 14:50:33,408 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-15 14:50:33,409 INFO [RS_OPEN_REGION-10.22.9.171:55757-0] regionserver.HRegion(871): Onlined 398ca33ca6e640575cac0c2baa029825; next sequenceid=2
2016-08-15 14:50:33,409 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542
2016-08-15 14:50:33,410 INFO [PostOpenDeployTasks:398ca33ca6e640575cac0c2baa029825] regionserver.HRegionServer(1952): Post open deploy tasks for ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.
2016-08-15 14:50:33,411 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.AssignmentManager(2884): Got transition OPENED for {398ca33ca6e640575cac0c2baa029825 state=PENDING_OPEN, ts=1471297833392, server=10.22.9.171,55757,1471297725443} from 10.22.9.171,55757,1471297725443
2016-08-15 14:50:33,411 INFO [B.defaultRpcServer.handler=1,queue=0,port=55755] master.RegionStates(1106): Transition {398ca33ca6e640575cac0c2baa029825 state=PENDING_OPEN, ts=1471297833392, server=10.22.9.171,55757,1471297725443} to {398ca33ca6e640575cac0c2baa029825 state=OPEN, ts=1471297833411, server=10.22.9.171,55757,1471297725443}
2016-08-15 14:50:33,411 INFO [B.defaultRpcServer.handler=1,queue=0,port=55755] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. with state=OPEN, openSeqNum=2, server=10.22.9.171,55757,1471297725443
2016-08-15 14:50:33,411 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:33,412 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.RegionStates(452): Onlined 398ca33ca6e640575cac0c2baa029825 on 10.22.9.171,55757,1471297725443
2016-08-15 14:50:33,412 DEBUG [ProcedureExecutor-7] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,55757,1471297725443
2016-08-15 14:50:33,412 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297833412,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"}
2016-08-15 14:50:33,412 ERROR [B.defaultRpcServer.handler=1,queue=0,port=55755] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-15 14:50:33,413 DEBUG [PostOpenDeployTasks:398ca33ca6e640575cac0c2baa029825] regionserver.HRegionServer(1979): Finished post open deploy task for ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.
2016-08-15 14:50:33,413 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] handler.OpenRegionHandler(126): Opened ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. on 10.22.9.171,55757,1471297725443
2016-08-15 14:50:33,413 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:33,414 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to ENABLED in META
2016-08-15 14:50:33,741 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:557550000000000
2016-08-15 14:50:33,741 DEBUG [ProcedureExecutor-7] procedure2.ProcedureExecutor(870): Procedure completed in 1.5240sec: CreateTableProcedure (table=ns2:table2_restore) id=16 owner=tyu state=FINISHED
2016-08-15 14:50:34,353 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-15 14:50:34,353 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns2:table2_restore completed
2016-08-15 14:50:34,353 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-15 14:50:34,353 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140019
2016-08-15 14:50:34,356 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:50:34,357 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (1070074624) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:34,357 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:55740; local cluster node: hdfs://localhost:55740
2016-08-15 14:50:34,357 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/archive/data/ns2/test-14712977502231 on local cluster, back it up before restore
2016-08-15 14:50:34,357 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (-1443711927) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:34,357 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56174 because read count=-1. Number of active connections: 11
2016-08-15 14:50:34,357 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56173 because read count=-1. Number of active connections: 11
2016-08-15 14:50:34,373 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741915_1091{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:50:34,377 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore
2016-08-15 14:50:34,378 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore
2016-08-15 14:50:34,395 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d
2016-08-15 14:50:34,395 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5aa7055e connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:34,398 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5aa7055e0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:34,399 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5745854e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:50:34,399 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:50:34,399 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:50:34,400 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5aa7055e-0x156902d8a14001a connected
2016-08-15 14:50:34,402 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:34,402 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56181; # active connections: 10
2016-08-15 14:50:34,403 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:34,403 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56181 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:34,409 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available
2016-08-15 14:50:34,415 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-15 14:50:34,415 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56182; # active connections: 11
2016-08-15 14:50:34,415 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:34,416 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from
10.22.9.171 port: 56182 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:34,420 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-15 14:50:34,423 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 first=row0 last=row99 2016-08-15 14:50:34,426 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825., hostname=10.22.9.171,55757,1471297725443, seqNum=2 for row with hfile group [{[B@40047522,hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2}] 2016-08-15 14:50:34,429 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:50:34,429 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56183; # active connections: 7 2016-08-15 14:50:34,430 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:34,430 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56183 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:34,430 INFO [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(670): Validating hfile at hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 for inclusion in store f region ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 
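The entries above and below trace HBase's bulk-load path: mapreduce.LoadIncrementalHFiles hands each restored HFile to the region server, which validates the file's key range against the region bounds and then commits it into the store. A minimal sketch of driving the same tool programmatically, using the staging directory and target table from this log (the class name BulkLoadRestoredHFiles is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
    import org.apache.hadoop.util.ToolRunner;

    public class BulkLoadRestoredHFiles {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // arg 0: directory whose column-family subdirectories (here "f") hold the HFiles;
        // arg 1: the target table. These are the same inputs the restore uses above.
        int rc = ToolRunner.run(conf, new LoadIncrementalHFiles(conf), new String[] {
            "hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d",
            "ns2:table2_restore" });
        System.exit(rc);
      }
    }

The same tool is also runnable from the shell as the completebulkload entry point; the programmatic form is shown here only because the log is produced by in-process calls.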
2016-08-15 14:50:34,433 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(682): HFile bounds: first=row0 last=row99
2016-08-15 14:50:34,433 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(684): Region bounds: first= last=
2016-08-15 14:50:34,435 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/9e508b28cd4545e68980bb32d76801e5_SeqId_4_
2016-08-15 14:50:34,436 INFO [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(742): Loaded HFile hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 into store 'f' as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/9e508b28cd4545e68980bb32d76801e5_SeqId_4_ - updating store file list.
2016-08-15 14:50:34,441 INFO [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(777): Loaded HFile hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/9e508b28cd4545e68980bb32d76801e5_SeqId_4_ into store 'f'
2016-08-15 14:50:34,441 INFO [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 into store f (new location: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/9e508b28cd4545e68980bb32d76801e5_SeqId_4_)
2016-08-15 14:50:34,441 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542
2016-08-15 14:50:34,442 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-15 14:50:34,442 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14001a
2016-08-15 14:50:34,443 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:50:34,443 INFO [main] impl.RestoreClientImpl(292): ns2:test-14712977502231 has been successfully restored to ns2:table2_restore
2016-08-15 14:50:34,444 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56183 because read count=-1. Number of active connections: 7
2016-08-15 14:50:34,444 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56181 because read count=-1. Number of active connections: 11
2016-08-15 14:50:34,444 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56182 because read count=-1. Number of active connections: 11
2016-08-15 14:50:34,444 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (1905415199) to /10.22.9.171:55757 from tyu: closed
2016-08-15 14:50:34,444 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-15 14:50:34,444 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471297762157 hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/
2016-08-15 14:50:34,444 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (1802906910) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:34,444 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (-1438597765) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:34,444 DEBUG [main] impl.RestoreClientImpl(215): Need to clear merged image; to be implemented in a future JIRA
2016-08-15 14:50:34,445 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns3/test-14712977502232/.backup.manifest
2016-08-15 14:50:34,448 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157
2016-08-15 14:50:34,448 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns3/test-14712977502232/.backup.manifest
2016-08-15 14:50:34,448 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns3:test-14712977502232' to 'ns3:table3_restore' from full backup image hdfs://localhost:55740/backupUT/backup_1471297762157/ns3/test-14712977502232
2016-08-15 14:50:34,453 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:55740/backupUT/backup_1471297762157/ns3/test-14712977502232/archive/data/ns3/test-14712977502232 does not exist
2016-08-15 14:50:34,453 DEBUG [main] util.RestoreServerUtil(315): found table descriptor but no archive dir for table ns3:test-14712977502232, will only create table
2016-08-15 14:50:34,454 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5ea2e094 connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:34,456 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5ea2e0940x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:34,456 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c48d2c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:50:34,456 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:50:34,456 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:50:34,457 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5ea2e094-0x156902d8a14001b connected
2016-08-15 14:50:34,458 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:34,458 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56187; # active connections: 10
2016-08-15 14:50:34,461 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:34,461 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755]
ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56187 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:34,462 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns3:table3_restore' 2016-08-15 14:50:34,463 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 14:50:34,463 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56188; # active connections: 11 2016-08-15 14:50:34,464 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:34,464 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56188 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:34,465 INFO [B.defaultRpcServer.handler=3,queue=0,port=55755] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2016-08-15 14:50:34,570 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns3:table3_restore) id=17 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 
2016-08-15 14:50:34,572 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=17 2016-08-15 14:50:34,573 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:557550000000000 2016-08-15 14:50:34,675 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=17 2016-08-15 14:50:34,688 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741916_1092{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 291 2016-08-15 14:50:34,881 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=17 2016-08-15 14:50:35,095 DEBUG [ProcedureExecutor-1] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns3/table3_restore/.tabledesc/.tableinfo.0000000001 2016-08-15 14:50:35,097 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(6162): creating HRegion ns3:table3_restore HTD == 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp Table name == ns3:table3_restore 2016-08-15 14:50:35,110 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741917_1093{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 45 2016-08-15 14:50:35,188 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=17 2016-08-15 14:50:35,517 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:50:35,517 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.: disabling compactions & flushes 2016-08-15 14:50:35,517 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:50:35,517 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 
2016-08-15 14:50:35,626 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3."}
2016-08-15 14:50:35,627 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:35,628 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1571): Added 1
2016-08-15 14:50:35,693 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-15 14:50:35,736 INFO [ProcedureExecutor-1] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,55757,1471297725443
2016-08-15 14:50:35,737 ERROR [ProcedureExecutor-1] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-15 14:50:35,737 INFO [ProcedureExecutor-1] master.RegionStates(1106): Transition {1945f514e609ff061d2c4aee1cdb82e3 state=OFFLINE, ts=1471297835736, server=null} to {1945f514e609ff061d2c4aee1cdb82e3 state=PENDING_OPEN, ts=1471297835737, server=10.22.9.171,55757,1471297725443}
2016-08-15 14:50:35,737 INFO [ProcedureExecutor-1] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.
with state=PENDING_OPEN, sn=10.22.9.171,55757,1471297725443 2016-08-15 14:50:35,738 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:35,739 INFO [PriorityRpcServer.handler=1,queue=1,port=55757] regionserver.RSRpcServices(1666): Open ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:50:35,744 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] regionserver.HRegion(6339): Opening region: {ENCODED => 1945f514e609ff061d2c4aee1cdb82e3, NAME => 'ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.', STARTKEY => '', ENDKEY => ''} 2016-08-15 14:50:35,744 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table3_restore 1945f514e609ff061d2c4aee1cdb82e3 2016-08-15 14:50:35,745 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:50:35,748 INFO [StoreOpener-1945f514e609ff061d2c4aee1cdb82e3-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-15 14:50:35,748 INFO [StoreOpener-1945f514e609ff061d2c4aee1cdb82e3-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-15 14:50:35,749 DEBUG [StoreOpener-1945f514e609ff061d2c4aee1cdb82e3-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3/f 2016-08-15 14:50:35,749 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3 2016-08-15 14:50:35,754 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-15 14:50:35,754 INFO [RS_OPEN_REGION-10.22.9.171:55757-1] regionserver.HRegion(871): Onlined 1945f514e609ff061d2c4aee1cdb82e3; next sequenceid=2 2016-08-15 14:50:35,755 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297811122 2016-08-15 14:50:35,756 INFO [PostOpenDeployTasks:1945f514e609ff061d2c4aee1cdb82e3] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.
2016-08-15 14:50:35,757 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.AssignmentManager(2884): Got transition OPENED for {1945f514e609ff061d2c4aee1cdb82e3 state=PENDING_OPEN, ts=1471297835737, server=10.22.9.171,55757,1471297725443} from 10.22.9.171,55757,1471297725443
2016-08-15 14:50:35,757 INFO [B.defaultRpcServer.handler=2,queue=0,port=55755] master.RegionStates(1106): Transition {1945f514e609ff061d2c4aee1cdb82e3 state=PENDING_OPEN, ts=1471297835737, server=10.22.9.171,55757,1471297725443} to {1945f514e609ff061d2c4aee1cdb82e3 state=OPEN, ts=1471297835757, server=10.22.9.171,55757,1471297725443}
2016-08-15 14:50:35,757 INFO [B.defaultRpcServer.handler=2,queue=0,port=55755] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. with state=OPEN, openSeqNum=2, server=10.22.9.171,55757,1471297725443
2016-08-15 14:50:35,757 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:35,758 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.RegionStates(452): Onlined 1945f514e609ff061d2c4aee1cdb82e3 on 10.22.9.171,55757,1471297725443
2016-08-15 14:50:35,758 DEBUG [ProcedureExecutor-1] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,55757,1471297725443
2016-08-15 14:50:35,758 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297835758,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"}
2016-08-15 14:50:35,758 ERROR [B.defaultRpcServer.handler=2,queue=0,port=55755] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-15 14:50:35,759 DEBUG [PostOpenDeployTasks:1945f514e609ff061d2c4aee1cdb82e3] regionserver.HRegionServer(1979): Finished post open deploy task for ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.
2016-08-15 14:50:35,762 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] handler.OpenRegionHandler(126): Opened ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. on 10.22.9.171,55757,1471297725443
2016-08-15 14:50:35,762 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:35,762 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to ENABLED in META
2016-08-15 14:50:36,085 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:557550000000000
2016-08-15 14:50:36,085 DEBUG [ProcedureExecutor-1] procedure2.ProcedureExecutor(870): Procedure completed in 1.5110sec: CreateTableProcedure (table=ns3:table3_restore) id=17 owner=tyu state=FINISHED
2016-08-15 14:50:36,108 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-15 14:50:36,698 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-15 14:50:36,699 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns3:table3_restore completed
2016-08-15 14:50:36,699 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-15 14:50:36,699 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14001b
2016-08-15 14:50:36,702 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:50:36,704 INFO [main] impl.RestoreClientImpl(292): ns3:test-14712977502232 has been successfully restored to ns3:table3_restore
2016-08-15 14:50:36,704 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-15 14:50:36,704 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471297762157 hdfs://localhost:55740/backupUT/backup_1471297762157/ns3/test-14712977502232/
2016-08-15 14:50:36,704 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56187 because read count=-1. Number of active connections: 11
2016-08-15 14:50:36,704 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (1474391423) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:36,704 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56188 because read count=-1. Number of active connections: 11
2016-08-15 14:50:36,704 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (707589641) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:36,704 DEBUG [main] impl.RestoreClientImpl(215): Need to clear merged image; to be implemented in a future JIRA
2016-08-15 14:50:36,706 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns4/test-14712977502233/.backup.manifest
2016-08-15 14:50:36,709 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157
2016-08-15 14:50:36,709 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns4/test-14712977502233/.backup.manifest
2016-08-15 14:50:36,709 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns4:test-14712977502233' to 'ns4:table4_restore' from full backup image hdfs://localhost:55740/backupUT/backup_1471297762157/ns4/test-14712977502233
2016-08-15 14:50:36,716 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:55740/backupUT/backup_1471297762157/ns4/test-14712977502233/archive/data/ns4/test-14712977502233 does not exist
2016-08-15 14:50:36,716 DEBUG [main] util.RestoreServerUtil(315): found table descriptor but no archive dir for table ns4:test-14712977502233, will only create table
2016-08-15 14:50:36,716 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5517ebd connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:50:36,719 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5517ebd0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:50:36,719 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@708aa744, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:50:36,720 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:50:36,720 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:50:36,720 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5517ebd-0x156902d8a14001c connected
2016-08-15 14:50:36,722 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:50:36,722 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56193; # active connections: 10
2016-08-15 14:50:36,723 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:50:36,723 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56193 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:50:36,724 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns4:table4_restore'
2016-08-15 14:50:36,725 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-15 14:50:36,725 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56194; # active connections: 11
2016-08-15 14:50:36,726 INFO
[RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:36,726 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56194 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:36,728 INFO [B.defaultRpcServer.handler=2,queue=0,port=55755] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns4:table4_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2016-08-15 14:50:36,832 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns4:table4_restore) id=18 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 2016-08-15 14:50:36,836 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=18 2016-08-15 14:50:36,837 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns4:table4_restore/write-master:557550000000000 2016-08-15 14:50:36,940 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=18 2016-08-15 14:50:36,953 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741918_1094{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 291 2016-08-15 14:50:37,143 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=18 2016-08-15 14:50:37,362 DEBUG [ProcedureExecutor-0] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns4/table4_restore/.tabledesc/.tableinfo.0000000001 2016-08-15 14:50:37,363 INFO [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(6162): creating HRegion ns4:table4_restore HTD == 'ns4:table4_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp Table name == ns4:table4_restore 2016-08-15 14:50:37,373 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741919_1095{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 45 2016-08-15 14:50:37,450 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=18 
2016-08-15 14:50:37,776 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(736): Instantiated ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.
2016-08-15 14:50:37,777 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1419): Closing ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.: disabling compactions & flushes
2016-08-15 14:50:37,777 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1446): Updates disabled for region ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.
2016-08-15 14:50:37,777 INFO [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1552): Closed ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.
2016-08-15 14:50:37,890 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd."}
2016-08-15 14:50:37,892 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:37,893 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1571): Added 1
2016-08-15 14:50:37,954 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-15 14:50:37,997 INFO [ProcedureExecutor-0] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,55757,1471297725443
2016-08-15 14:50:37,998 ERROR [ProcedureExecutor-0] master.TableStateManager(134): Unable to get table ns4:table4_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns4:table4_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
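This ERROR repeats for each restored table: during CreateTableProcedure the master consults TableStateManager while assigning regions, apparently before the new table's state record has been written, so the lookup throws TableNotFoundException; the procedure nonetheless completes (state ENABLED in META and state=FINISHED below). On the client side, HBaseAdmin simply keeps polling the master ("Checking to see if procedure is done procId=18") until the create finishes. A minimal sketch of the equivalent client-side creation of the table shown in the log, assuming the ns4 namespace already exists (the class name CreateRestoreTable is hypothetical):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateRestoreTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("ns4:table4_restore"));
          htd.addFamily(new HColumnDescriptor("f")); // single family with defaults, as in the log
          // createTable blocks while the client polls the master until the
          // CreateTableProcedure reports done.
          admin.createTable(htd);
        }
      }
    }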
2016-08-15 14:50:37,998 INFO [ProcedureExecutor-0] master.RegionStates(1106): Transition {6cdb399964f82b5b2b7ceb6977686dfd state=OFFLINE, ts=1471297837997, server=null} to {6cdb399964f82b5b2b7ceb6977686dfd state=PENDING_OPEN, ts=1471297837998, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:50:37,999 INFO [ProcedureExecutor-0] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd. with state=PENDING_OPEN, sn=10.22.9.171,55757,1471297725443 2016-08-15 14:50:37,999 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:38,001 INFO [PriorityRpcServer.handler=4,queue=0,port=55757] regionserver.RSRpcServices(1666): Open ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd. 2016-08-15 14:50:38,005 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(6339): Opening region: {ENCODED => 6cdb399964f82b5b2b7ceb6977686dfd, NAME => 'ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.', STARTKEY => '', ENDKEY => ''} 2016-08-15 14:50:38,006 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table4_restore 6cdb399964f82b5b2b7ceb6977686dfd 2016-08-15 14:50:38,006 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(736): Instantiated ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd. 2016-08-15 14:50:38,009 INFO [StoreOpener-6cdb399964f82b5b2b7ceb6977686dfd-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1087208, freeSize=1042875096, maxSize=1043962304, heapSize=1087208, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-15 14:50:38,009 INFO [StoreOpener-6cdb399964f82b5b2b7ceb6977686dfd-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-15 14:50:38,010 DEBUG [StoreOpener-6cdb399964f82b5b2b7ceb6977686dfd-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns4/table4_restore/6cdb399964f82b5b2b7ceb6977686dfd/f 2016-08-15 14:50:38,011 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns4/table4_restore/6cdb399964f82b5b2b7ceb6977686dfd 2016-08-15 14:50:38,015 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns4/table4_restore/6cdb399964f82b5b2b7ceb6977686dfd/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-15 14:50:38,016 INFO [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(871): Onlined 
6cdb399964f82b5b2b7ceb6977686dfd; next sequenceid=2
2016-08-15 14:50:38,016 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:50:38,017 INFO [PostOpenDeployTasks:6cdb399964f82b5b2b7ceb6977686dfd] regionserver.HRegionServer(1952): Post open deploy tasks for ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.
2016-08-15 14:50:38,017 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.AssignmentManager(2884): Got transition OPENED for {6cdb399964f82b5b2b7ceb6977686dfd state=PENDING_OPEN, ts=1471297837998, server=10.22.9.171,55757,1471297725443} from 10.22.9.171,55757,1471297725443
2016-08-15 14:50:38,017 INFO [B.defaultRpcServer.handler=4,queue=0,port=55755] master.RegionStates(1106): Transition {6cdb399964f82b5b2b7ceb6977686dfd state=PENDING_OPEN, ts=1471297837998, server=10.22.9.171,55757,1471297725443} to {6cdb399964f82b5b2b7ceb6977686dfd state=OPEN, ts=1471297838017, server=10.22.9.171,55757,1471297725443}
2016-08-15 14:50:38,018 INFO [B.defaultRpcServer.handler=4,queue=0,port=55755] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd. with state=OPEN, openSeqNum=2, server=10.22.9.171,55757,1471297725443
2016-08-15 14:50:38,018 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:38,019 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.RegionStates(452): Onlined 6cdb399964f82b5b2b7ceb6977686dfd on 10.22.9.171,55757,1471297725443
2016-08-15 14:50:38,019 DEBUG [ProcedureExecutor-0] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,55757,1471297725443
2016-08-15 14:50:38,019 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297838019,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns4:table4_restore"}
2016-08-15 14:50:38,019 ERROR [B.defaultRpcServer.handler=4,queue=0,port=55755] master.TableStateManager(134): Unable to get table ns4:table4_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns4:table4_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-15 14:50:38,023 DEBUG [PostOpenDeployTasks:6cdb399964f82b5b2b7ceb6977686dfd] regionserver.HRegionServer(1979): Finished post open deploy task for ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.
2016-08-15 14:50:38,023 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:50:38,023 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] handler.OpenRegionHandler(126): Opened ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd. on 10.22.9.171,55757,1471297725443
2016-08-15 14:50:38,024 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table ns4:table4_restore state to ENABLED in META
2016-08-15 14:50:38,346 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns4:table4_restore/write-master:557550000000000
2016-08-15 14:50:38,346 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 1.5130sec: CreateTableProcedure (table=ns4:table4_restore) id=18 owner=tyu state=FINISHED
2016-08-15 14:50:38,959 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-15 14:50:38,959 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns4:table4_restore completed
2016-08-15 14:50:38,960 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-15 14:50:38,960 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14001c
2016-08-15 14:50:38,963 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:50:38,964 INFO [main] impl.RestoreClientImpl(292): ns4:test-14712977502233 has been successfully restored to ns4:table4_restore
2016-08-15 14:50:38,964 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-15 14:50:38,964 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (309800776) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:38,965 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471297762157 hdfs://localhost:55740/backupUT/backup_1471297762157/ns4/test-14712977502233/
2016-08-15 14:50:38,965 DEBUG [main] impl.RestoreClientImpl(234): restoreStage finished
2016-08-15 14:50:38,964 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56193 because read count=-1. Number of active connections: 11
2016-08-15 14:50:38,964 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56194 because read count=-1. Number of active connections: 11
2016-08-15 14:50:38,964 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (866602379) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:50:38,965 INFO [main] impl.RestoreClientImpl(108): Restore for [ns1:test-1471297750223, ns2:test-14712977502231, ns3:test-14712977502232, ns4:test-14712977502233] is successful!
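At this point all four tables have been restored from the full backup image backup_1471297762157. A quick sanity check on a restored table is to scan it and count rows; the HFile bulk-loaded earlier spanned first=row0 last=row99, so on the order of 100 rows are expected in ns2:table2_restore. A minimal sketch (the class name VerifyRestore is hypothetical):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    public class VerifyRestore {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("ns2:table2_restore"));
             ResultScanner scanner = table.getScanner(new Scan())) {
          long rows = 0;
          for (Result ignored : scanner) {
            rows++; // count every row returned by the full-table scan
          }
          System.out.println("ns2:table2_restore rows = " + rows);
        }
      }
    }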
2016-08-15 14:50:39,014 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297810954/ns1/test-1471297750223/.backup.manifest 2016-08-15 14:50:39,017 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297810954 2016-08-15 14:50:39,018 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297810954/ns1/test-1471297750223/.backup.manifest 2016-08-15 14:50:39,018 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297810954/ns2/test-14712977502231/.backup.manifest 2016-08-15 14:50:39,021 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297810954 2016-08-15 14:50:39,021 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297810954/ns2/test-14712977502231/.backup.manifest 2016-08-15 14:50:39,022 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297810954/ns3/test-14712977502232/.backup.manifest 2016-08-15 14:50:39,026 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297810954 2016-08-15 14:50:39,026 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297810954/ns3/test-14712977502232/.backup.manifest 2016-08-15 14:50:39,026 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x22c0d812 connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:50:39,029 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x22c0d8120x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:50:39,030 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6165168a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:50:39,030 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:50:39,030 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:50:39,031 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x22c0d812-0x156902d8a14001d connected 2016-08-15 14:50:39,032 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:50:39,032 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56202; # active connections: 10 2016-08-15 14:50:39,033 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:39,033 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56202 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:39,041 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14001d 2016-08-15 14:50:39,041 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase 
RPC client 2016-08-15 14:50:39,042 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged image; to be implemented in a future JIRA 2016-08-15 14:50:39,042 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56202 because read count=-1. Number of active connections: 10 2016-08-15 14:50:39,042 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (1397234311) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:50:39,043 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/.backup.manifest 2016-08-15 14:50:39,046 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157 2016-08-15 14:50:39,046 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns1/test-1471297750223/.backup.manifest 2016-08-15 14:50:39,046 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns1:test-1471297750223' to 'ns1:table1_restore' from full backup image hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223 2016-08-15 14:50:39,054 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7ea90bbc connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:50:39,057 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x7ea90bbc0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:50:39,058 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1bdaed4e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:50:39,058 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:50:39,058 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:50:39,059 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x7ea90bbc-0x156902d8a14001e connected 2016-08-15 14:50:39,060 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:50:39,060 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56206; # active connections: 10 2016-08-15 14:50:39,061 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:39,061 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56206 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:39,062 INFO [main] util.RestoreServerUtil(585): Truncating existing target table 'ns1:table1_restore', preserving region splits 2016-08-15 14:50:39,065 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 14:50:39,065 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880):
RpcServer.listener,port=55755: connection from 10.22.9.171:56207; # active connections: 11 2016-08-15 14:50:39,066 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:39,066 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56207 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:39,066 INFO [main] client.HBaseAdmin$10(780): Started disable of ns1:table1_restore 2016-08-15 14:50:39,070 INFO [B.defaultRpcServer.handler=4,queue=0,port=55755] master.HMaster(1986): Client=tyu//10.22.9.171 disable ns1:table1_restore 2016-08-15 14:50:39,184 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns1:table1_restore) id=19 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store. 2016-08-15 14:50:39,187 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:557550000000001 2016-08-15 14:50:39,189 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=19 2016-08-15 14:50:39,291 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=19 2016-08-15 14:50:39,397 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297839397,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"} 2016-08-15 14:50:39,399 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:39,400 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to DISABLING in META 2016-08-15 14:50:39,499 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=19 2016-08-15 14:50:39,507 INFO [ProcedureExecutor-2] procedure.DisableTableProcedure(395): Offlining 1 regions. 2016-08-15 14:50:39,512 DEBUG [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 
(offlining), current state: {1a2af1efddb74842cc0d4b4b051d5478 state=OPEN, ts=1471297830617, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:50:39,512 INFO [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {1a2af1efddb74842cc0d4b4b051d5478 state=OPEN, ts=1471297830617, server=10.22.9.171,55757,1471297725443} to {1a2af1efddb74842cc0d4b4b051d5478 state=PENDING_CLOSE, ts=1471297839512, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:50:39,512 INFO [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. with state=PENDING_CLOSE 2016-08-15 14:50:39,512 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:39,516 INFO [PriorityRpcServer.handler=0,queue=0,port=55757] regionserver.RSRpcServices(1314): Close 1a2af1efddb74842cc0d4b4b051d5478, moving to null 2016-08-15 14:50:39,517 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] handler.CloseRegionHandler(90): Processing close of ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:50:39,517 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.: disabling compactions & flushes 2016-08-15 14:50:39,517 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:50:39,519 INFO [StoreCloserThread-ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.-1] regionserver.HStore(839): Closed f 2016-08-15 14:50:39,519 DEBUG [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.9.171,55757,1471297725443 for region ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:50:39,519 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964 2016-08-15 14:50:39,524 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/recovered.edits/6.seqid to file, newSeqId=6, maxSeqId=2 2016-08-15 14:50:39,527 INFO [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 
2016-08-15 14:50:39,528 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.AssignmentManager(2884): Got transition CLOSED for {1a2af1efddb74842cc0d4b4b051d5478 state=PENDING_CLOSE, ts=1471297839512, server=10.22.9.171,55757,1471297725443} from 10.22.9.171,55757,1471297725443 2016-08-15 14:50:39,529 INFO [B.defaultRpcServer.handler=2,queue=0,port=55755] master.RegionStates(1106): Transition {1a2af1efddb74842cc0d4b4b051d5478 state=PENDING_CLOSE, ts=1471297839512, server=10.22.9.171,55757,1471297725443} to {1a2af1efddb74842cc0d4b4b051d5478 state=OFFLINE, ts=1471297839529, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:50:39,529 INFO [B.defaultRpcServer.handler=2,queue=0,port=55755] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. with state=OFFLINE 2016-08-15 14:50:39,529 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:39,530 INFO [B.defaultRpcServer.handler=2,queue=0,port=55755] master.RegionStates(590): Offlined 1a2af1efddb74842cc0d4b4b051d5478 from 10.22.9.171,55757,1471297725443 2016-08-15 14:50:39,530 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] handler.CloseRegionHandler(122): Closed ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:50:39,670 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297839669,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"} 2016-08-15 14:50:39,671 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:39,671 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to DISABLED in META 2016-08-15 14:50:39,671 INFO [ProcedureExecutor-2] procedure.DisableTableProcedure(424): Disabled table, ns1:table1_restore, is completed. 
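The disable that just completed is the first half of how the restore utility reuses an existing target table: disable it, then truncate it with preserveSplits (the TruncateTableProcedure that follows below) so the region boundaries survive. A hedged sketch of the same sequence through the public Admin API; the procedure names in the comments come from the log, the helper itself is illustrative:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class TruncateForRestore {
      // Mirrors the log: DisableTableProcedure (procId=19), then
      // TruncateTableProcedure (procId=20, preserveSplits=true).
      static void truncateForRestore(Admin admin, TableName target) throws IOException {
        if (admin.isTableEnabled(target)) {
          admin.disableTable(target);
        }
        admin.truncateTable(target, true); // true => preserve region splits
      }
    }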
2016-08-15 14:50:39,804 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=19 2016-08-15 14:50:39,882 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:557550000000001 2016-08-15 14:50:39,883 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 704msec: DisableTableProcedure (table=ns1:table1_restore) id=19 owner=tyu state=FINISHED 2016-08-15 14:50:40,308 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=19 2016-08-15 14:50:40,308 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns1:table1_restore completed 2016-08-15 14:50:40,310 INFO [main] client.HBaseAdmin$8(615): Started truncating ns1:table1_restore 2016-08-15 14:50:40,314 INFO [B.defaultRpcServer.handler=3,queue=0,port=55755] master.HMaster(1848): Client=tyu//10.22.9.171 truncate ns1:table1_restore 2016-08-15 14:50:40,421 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns1:table1_restore preserveSplits=true) id=20 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store. 2016-08-15 14:50:40,423 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:557550000000002 2016-08-15 14:50:40,425 DEBUG [ProcedureExecutor-3] procedure.TruncateTableProcedure(87): waiting for 'ns1:table1_restore' regions in transition 2016-08-15 14:50:40,531 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1471297840531,"tag":[],"qualifier":"","vlen":0}]},"row":"ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478."} 2016-08-15 14:50:40,532 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:40,533 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => 1a2af1efddb74842cc0d4b4b051d5478, NAME => 'ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.', STARTKEY => '', ENDKEY => ''}] 2016-08-15 14:50:40,535 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(408): Removing 'ns1:table1_restore' from region states. 2016-08-15 14:50:40,536 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(412): Marking 'ns1:table1_restore' as deleted. 2016-08-15 14:50:40,536 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1471297840536,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns1:table1_restore"} 2016-08-15 14:50:40,537 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:40,538 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1726): Deleted table ns1:table1_restore state from META 2016-08-15 14:50:40,650 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(340): Archiving region ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 
from FS 2016-08-15 14:50:40,654 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478 2016-08-15 14:50:40,658 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/recovered.edits] 2016-08-15 14:50:40,667 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/15ef6fbfedaf4639b3ee9276dae41731_SeqId_4_, to hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/archive/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/15ef6fbfedaf4639b3ee9276dae41731_SeqId_4_ 2016-08-15 14:50:40,672 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/recovered.edits/6.seqid, to hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/archive/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/recovered.edits/6.seqid 2016-08-15 14:50:40,673 INFO [IPC Server handler 0 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741911_1087 127.0.0.1:55741 2016-08-15 14:50:40,673 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478 2016-08-15 14:50:40,673 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(344): Table 'ns1:table1_restore' archived! 
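HFileArchiver does not delete the old region's files; it moves them from the table's .tmp data directory into the cluster archive tree, as the two "Finished archiving ... to ... archive/data/..." entries show. A small sketch, using only the Hadoop FileSystem API, of how one could list what landed in that archive directory; the path is copied from the log, and the sketch is illustrative rather than part of the test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // List the archived store files of the truncated region.
    public class ListArchivedFiles {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path archived = new Path("hdfs://localhost:55740/user/tyu/test-data/"
            + "adee3504-45e2-49c7-b960-e36724cc46d8/archive/data/ns1/table1_restore");
        FileSystem fs = archived.getFileSystem(conf);
        for (FileStatus status : fs.listStatus(archived)) {
          System.out.println(status.getPath());
        }
      }
    }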
2016-08-15 14:50:40,675 INFO [IPC Server handler 6 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741910_1086 127.0.0.1:55741 2016-08-15 14:50:40,788 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741920_1096{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 290 2016-08-15 14:50:41,198 DEBUG [ProcedureExecutor-3] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns1/table1_restore/.tabledesc/.tableinfo.0000000001 2016-08-15 14:50:41,201 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(6162): creating HRegion ns1:table1_restore HTD == 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp Table name == ns1:table1_restore 2016-08-15 14:50:41,212 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741921_1097{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 45 2016-08-15 14:50:41,259 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties 2016-08-15 14:50:41,618 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:50:41,619 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.: disabling compactions & flushes 2016-08-15 14:50:41,619 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:50:41,620 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 
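The HTD dump above records the schema the truncate recreates the region with. For reference, the same column family expressed through the HTableDescriptor/HColumnDescriptor API of this 2.0-SNAPSHOT era; only values printed in the log's HTD string are set here, everything else is left at its default:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;

    public class Table1RestoreSchema {
      static HTableDescriptor descriptor() {
        HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("ns1:table1_restore"));
        HColumnDescriptor f = new HColumnDescriptor("f");
        f.setMaxVersions(1);           // VERSIONS => '1'
        f.setBlocksize(65536);         // BLOCKSIZE => '65536'
        f.setInMemory(false);          // IN_MEMORY => 'false'
        f.setBlockCacheEnabled(true);  // BLOCKCACHE => 'true'
        htd.addFamily(f);
        return htd;
      }
    }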
2016-08-15 14:50:41,732 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478."} 2016-08-15 14:50:41,733 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:41,734 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1571): Added 1 2016-08-15 14:50:41,839 INFO [ProcedureExecutor-3] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,55757,1471297725443 2016-08-15 14:50:41,840 ERROR [ProcedureExecutor-3] master.TableStateManager(134): Unable to get table ns1:table1_restore state org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546) at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430) at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122) at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-15 14:50:41,841 INFO [ProcedureExecutor-3] master.RegionStates(1106): Transition {1a2af1efddb74842cc0d4b4b051d5478 state=OFFLINE, ts=1471297841839, server=null} to {1a2af1efddb74842cc0d4b4b051d5478 state=PENDING_OPEN, ts=1471297841841, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:50:41,841 INFO [ProcedureExecutor-3] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 
with state=PENDING_OPEN, sn=10.22.9.171,55757,1471297725443 2016-08-15 14:50:41,841 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:41,843 INFO [PriorityRpcServer.handler=3,queue=1,port=55757] regionserver.RSRpcServices(1666): Open ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:50:41,848 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] regionserver.HRegion(6339): Opening region: {ENCODED => 1a2af1efddb74842cc0d4b4b051d5478, NAME => 'ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.', STARTKEY => '', ENDKEY => ''} 2016-08-15 14:50:41,848 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table1_restore 1a2af1efddb74842cc0d4b4b051d5478 2016-08-15 14:50:41,849 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:50:41,852 INFO [StoreOpener-1a2af1efddb74842cc0d4b4b051d5478-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-15 14:50:41,852 INFO [StoreOpener-1a2af1efddb74842cc0d4b4b051d5478-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-15 14:50:41,853 DEBUG [StoreOpener-1a2af1efddb74842cc0d4b4b051d5478-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f 2016-08-15 14:50:41,854 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478 2016-08-15 14:50:41,859 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-15 14:50:41,859 INFO [RS_OPEN_REGION-10.22.9.171:55757-0] regionserver.HRegion(871): Onlined 1a2af1efddb74842cc0d4b4b051d5478; next sequenceid=2 2016-08-15 14:50:41,859 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964 2016-08-15 14:50:41,860 INFO [PostOpenDeployTasks:1a2af1efddb74842cc0d4b4b051d5478] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:50:41,860 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.AssignmentManager(2884): Got transition OPENED for {1a2af1efddb74842cc0d4b4b051d5478 state=PENDING_OPEN, ts=1471297841841, server=10.22.9.171,55757,1471297725443} from 10.22.9.171,55757,1471297725443 2016-08-15 14:50:41,860 INFO [B.defaultRpcServer.handler=0,queue=0,port=55755] master.RegionStates(1106): Transition {1a2af1efddb74842cc0d4b4b051d5478 state=PENDING_OPEN, ts=1471297841841, server=10.22.9.171,55757,1471297725443} to {1a2af1efddb74842cc0d4b4b051d5478 state=OPEN, ts=1471297841860, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:50:41,860 INFO [B.defaultRpcServer.handler=0,queue=0,port=55755] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. with state=OPEN, openSeqNum=2, server=10.22.9.171,55757,1471297725443 2016-08-15 14:50:41,861 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:41,862 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.RegionStates(452): Onlined 1a2af1efddb74842cc0d4b4b051d5478 on 10.22.9.171,55757,1471297725443 2016-08-15 14:50:41,862 DEBUG [ProcedureExecutor-3] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,55757,1471297725443 2016-08-15 14:50:41,862 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297841862,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"} 2016-08-15 14:50:41,862 ERROR [B.defaultRpcServer.handler=0,queue=0,port=55755] master.TableStateManager(134): Unable to get table ns1:table1_restore state org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891) at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 2016-08-15 14:50:41,863 DEBUG [PostOpenDeployTasks:1a2af1efddb74842cc0d4b4b051d5478] regionserver.HRegionServer(1979): Finished post open deploy task for ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 
2016-08-15 14:50:41,863 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:50:41,863 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-0] handler.OpenRegionHandler(126): Opened ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. on 10.22.9.171,55757,1471297725443 2016-08-15 14:50:41,864 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to ENABLED in META 2016-08-15 14:50:41,974 DEBUG [ProcedureExecutor-3] procedure.TruncateTableProcedure(129): truncate 'ns1:table1_restore' completed 2016-08-15 14:50:42,083 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:557550000000002 2016-08-15 14:50:42,084 DEBUG [ProcedureExecutor-3] procedure2.ProcedureExecutor(870): Procedure completed in 1.6600sec: TruncateTableProcedure (table=ns1:table1_restore preserveSplits=true) id=20 owner=tyu state=FINISHED 2016-08-15 14:50:42,198 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=20 2016-08-15 14:50:42,198 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns1:table1_restore completed 2016-08-15 14:50:42,198 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-15 14:50:42,198 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14001e 2016-08-15 14:50:42,201 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:50:42,202 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-1397022600) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:50:42,202 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:55740; local cluster node: hdfs://localhost:55740 2016-08-15 14:50:42,202 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/archive/data/ns1/test-1471297750223 on local cluster, back it up before restore 2016-08-15 14:50:42,202 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56206 because read count=-1. Number of active connections: 11 2016-08-15 14:50:42,202 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (1035227506) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:50:42,202 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56207 because read count=-1. 
Number of active connections: 11 2016-08-15 14:50:42,219 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741922_1098{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 12093 2016-08-15 14:50:42,626 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore 2016-08-15 14:50:42,628 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore 2016-08-15 14:50:42,647 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803 2016-08-15 14:50:42,647 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4d0ae432 connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:50:42,651 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x4d0ae4320x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:50:42,652 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@de243bc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:50:42,652 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:50:42,652 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x4d0ae432-0x156902d8a14001f connected 2016-08-15 14:50:42,652 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:50:42,655 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:50:42,655 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56212; # active connections: 10 2016-08-15 14:50:42,656 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:42,656 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56212 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:42,662 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available 2016-08-15 14:50:42,668 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 14:50:42,668 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56213; # active connections: 11 2016-08-15 14:50:42,669 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:42,669 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 
port: 56213 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:42,674 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-15 14:50:42,678 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f first=row0 last=row99 2016-08-15 14:50:42,682 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478., hostname=10.22.9.171,55757,1471297725443, seqNum=2 for row with hfile group [{[B@75c0727b,hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f}] 2016-08-15 14:50:42,683 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:50:42,683 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56214; # active connections: 7 2016-08-15 14:50:42,684 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:42,684 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56214 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:42,684 INFO [B.defaultRpcServer.handler=3,queue=0,port=55757] regionserver.HStore(670): Validating hfile at hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f for inclusion in store f region ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 
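The bulk-load handler has started validating the staged HFile; the entries that continue below check its first/last row (row0/row99) against the region's bounds before committing the file into store 'f'. For context, a sketch of how this load is typically driven from the client side with LoadIncrementalHFiles, assuming the 1.x/2.0-era doBulkLoad(Path, Admin, Table, RegionLocator) signature; the staging path and table name are from the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

    // Hand the staged HFile directory to the bulk loader, which validates each
    // file's key range against region boundaries and moves it into the store.
    public class BulkLoadRestore {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName name = TableName.valueOf("ns1:table1_restore");
        Path staged = new Path("hdfs://localhost:55740/user/tyu/hbase-staging/"
            + "restore/d0d5e63c01f66001cc1c60dbba147803");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(name);
             RegionLocator locator = conn.getRegionLocator(name);
             Admin admin = conn.getAdmin()) {
          new LoadIncrementalHFiles(conf).doBulkLoad(staged, admin, table, locator);
        }
      }
    }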
2016-08-15 14:50:42,687 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55757] regionserver.HStore(682): HFile bounds: first=row0 last=row99 2016-08-15 14:50:42,687 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55757] regionserver.HStore(684): Region bounds: first= last= 2016-08-15 14:50:42,689 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55757] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/8881bee57c6e483580e159b35babcbad_SeqId_4_ 2016-08-15 14:50:42,690 INFO [B.defaultRpcServer.handler=3,queue=0,port=55757] regionserver.HStore(742): Loaded HFile hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f into store 'f' as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/8881bee57c6e483580e159b35babcbad_SeqId_4_ - updating store file list. 2016-08-15 14:50:42,695 INFO [B.defaultRpcServer.handler=3,queue=0,port=55757] regionserver.HStore(777): Loaded HFile hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/8881bee57c6e483580e159b35babcbad_SeqId_4_ into store 'f' 2016-08-15 14:50:42,696 INFO [B.defaultRpcServer.handler=3,queue=0,port=55757] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:55740/user/tyu/hbase-staging/restore/d0d5e63c01f66001cc1c60dbba147803/f/d877eabaa256430aadac750bc00ca29f into store f (new location: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/8881bee57c6e483580e159b35babcbad_SeqId_4_) 2016-08-15 14:50:42,696 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964 2016-08-15 14:50:42,697 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-15 14:50:42,697 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14001f 2016-08-15 14:50:42,700 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:50:42,700 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (322751287) to /10.22.9.171:55757 from tyu: closed 2016-08-15 14:50:42,701 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56212 because read count=-1. Number of active connections: 11 2016-08-15 14:50:42,701 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56214 because read count=-1. Number of active connections: 7 2016-08-15 14:50:42,701 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56213 because read count=-1.
Number of active connections: 11 2016-08-15 14:50:42,700 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (1398470144) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:50:42,700 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (146638576) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:50:42,702 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns1:test-1471297750223' to 'ns1:table1_restore' from log dirs: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs 2016-08-15 14:50:42,702 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x49100f94 connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:50:42,705 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x49100f940x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:50:42,706 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@452d8d08, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:50:42,706 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:50:42,706 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:50:42,706 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x49100f94-0x156902d8a140020 connected 2016-08-15 14:50:42,708 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:50:42,708 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56216; # active connections: 10 2016-08-15 14:50:42,708 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:42,709 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56216 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:42,714 INFO [main] mapreduce.MapReduceRestoreService(56): Restore incremental backup from directory hdfs://localhost:55740/backupUT/backup_1471297810954/WALs from hbase tables ,ns1:test-1471297750223 to tables ,ns1:table1_restore 2016-08-15 14:50:42,714 INFO [main] mapreduce.MapReduceRestoreService(61): Restore ns1:test-1471297750223 into ns1:table1_restore 2016-08-15 14:50:42,718 DEBUG [main] mapreduce.WALPlayer(299): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471297842714 2016-08-15 14:50:42,720 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x29c70b12 connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:50:42,723 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x29c70b120x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:50:42,724 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@499f3122, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:50:42,724 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:50:42,724 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:50:42,725 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x29c70b12-0x156902d8a140021 connected 2016-08-15 14:50:42,727 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 14:50:42,727 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56218; # active connections: 11 2016-08-15 14:50:42,728 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:42,728 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56218 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:42,734 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled 2016-08-15 14:50:42,734 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns1:test-1471297750223 2016-08-15 14:50:42,737 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:50:42,737 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56219; # active connections: 12 2016-08-15 14:50:42,738 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:42,738 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56219 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:42,741 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count 2016-08-15 14:50:42,741 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_defa4a4a-2cc9-4e62-9bb7-075763f4bf72 2016-08-15 14:50:42,756 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741923_1099{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 153 2016-08-15 14:50:43,167 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 
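The WALPlayer and HFileOutputFormat2 entries above are assembling a MapReduce job that replays the backed-up WALs into HFiles rather than live writes. A sketch of driving the same tool directly, assuming this era's WALPlayer argument order of input dir, tables, table mappings, and its BULK_OUTPUT_CONF_KEY ("wal.bulk.output") switch for HFile output; the input and bulk-output paths are the ones printed in the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.WALPlayer;
    import org.apache.hadoop.util.ToolRunner;

    // Replay backed-up WALs into HFiles via HFileOutputFormat2 instead of Puts.
    public class ReplayBackupWals {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set(WALPlayer.BULK_OUTPUT_CONF_KEY,
            "/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/"
            + "bulk_output-ns1-table1_restore-1471297842714");
        int exit = ToolRunner.run(conf, new WALPlayer(), new String[] {
            "hdfs://localhost:55740/backupUT/backup_1471297810954/WALs", // WAL input dir
            "ns1:test-1471297750223",   // source table recorded in the WALs
            "ns1:table1_restore"        // mapping: replay into the restore table
        });
        System.exit(exit);
      }
    }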
2016-08-15 14:50:43,376 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-435929772837673908.jar 2016-08-15 14:50:43,666 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@75965af8] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:55741 to delete [blk_1073741910_1086, blk_1073741911_1087] 2016-08-15 14:50:44,543 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-7030945684806498585.jar 2016-08-15 14:50:44,921 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-5499059393539923898.jar 2016-08-15 14:50:44,940 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-1155326079252362155.jar 2016-08-15 14:50:46,123 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-5771948040543728462.jar 2016-08-15 14:50:46,124 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar 2016-08-15 14:50:46,124 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar 2016-08-15 14:50:46,125 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar 2016-08-15 14:50:46,125 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-15 14:50:46,125 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar 2016-08-15 14:50:46,125 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar 2016-08-15 14:50:46,334 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-6026679476306740935.jar 2016-08-15 14:50:46,335 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-6026679476306740935.jar 2016-08-15 14:50:47,335 DEBUG [10.22.9.171,55757,1471297725443_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-15 
14:50:47,358 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties 2016-08-15 14:50:47,518 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-2310854706754339657.jar 2016-08-15 14:50:47,519 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-6026679476306740935.jar 2016-08-15 14:50:47,519 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-6026679476306740935.jar 2016-08-15 14:50:47,520 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-2310854706754339657.jar 2016-08-15 14:50:47,520 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar 2016-08-15 14:50:47,521 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns1:test-1471297750223 output configured. 2016-08-15 14:50:47,521 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-15 14:50:47,521 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140021 2016-08-15 14:50:47,522 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:50:47,523 DEBUG [main] mapreduce.WALPlayer(316): success configuring load incremental job 2016-08-15 14:50:47,523 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56219 because read count=-1. Number of active connections: 12 2016-08-15 14:50:47,523 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (480887380) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:50:47,523 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56218 because read count=-1. 
Number of active connections: 12 2016-08-15 14:50:47,523 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (-82901983) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:50:47,523 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-15 14:50:47,668 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741924_1100{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 1556922 2016-08-15 14:50:47,697 INFO [10.22.9.171,55755,1471297724766_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5acb607c connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:50:47,700 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5acb607c0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:50:47,702 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44fb60a0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:50:47,702 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:50:47,702 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:50:47,702 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup 2016-08-15 14:50:47,703 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5acb607c-0x156902d8a140022 connected 2016-08-15 14:50:47,705 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:50:47,706 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56224; # active connections: 11 2016-08-15 14:50:47,706 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:50:47,706 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56224 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:47,710 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:50:47,710 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56225; # active connections: 7 2016-08-15 14:50:47,710 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu 
(auth:SIMPLE) 2016-08-15 14:50:47,711 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56225 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:50:47,713 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297729233 2016-08-15 14:50:47,714 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297729233 2016-08-15 14:50:47,714 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531 2016-08-15 14:50:47,715 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531 2016-08-15 14:50:47,715 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961 2016-08-15 14:50:47,716 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961 2016-08-15 14:50:47,716 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297729233 2016-08-15 14:50:47,717 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297729233 2016-08-15 14:50:47,717 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531 2016-08-15 14:50:47,718 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: 
hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531 2016-08-15 14:50:47,718 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595 2016-08-15 14:50:47,719 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595 2016-08-15 14:50:47,719 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231 2016-08-15 14:50:47,720 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231 2016-08-15 14:50:47,720 INFO [10.22.9.171,55755,1471297724766_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140022 2016-08-15 14:50:47,721 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:50:47,722 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56224 because read count=-1. Number of active connections: 11 2016-08-15 14:50:47,722 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (744075076) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:50:47,722 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56225 because read count=-1. 
Number of active connections: 7 2016-08-15 14:50:47,722 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (1413709588) to /10.22.9.171:55757 from tyu: closed 2016-08-15 14:50:48,105 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741925_1101{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 4669607 2016-08-15 14:50:48,521 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741926_1102{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 533455 2016-08-15 14:50:48,938 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741927_1103{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 662656 2016-08-15 14:50:49,354 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741928_1104{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 112558 2016-08-15 14:50:49,775 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741929_1105{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 1351207 2016-08-15 14:50:49,866 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-15 14:50:50,203 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741930_1106{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 4515909 2016-08-15 14:50:50,622 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741931_1107{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 1475955 2016-08-15 14:50:51,037 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741932_1108{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0 2016-08-15 14:50:51,047 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741933_1109{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 662656 2016-08-15 14:50:51,477 INFO 
[Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741934_1110{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 4515909 2016-08-15 14:50:51,895 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741935_1111{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 0 2016-08-15 14:50:51,906 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741936_1112{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 792964 2016-08-15 14:50:52,327 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741937_1113{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 1795932 2016-08-15 14:50:52,730 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String). 2016-08-15 14:50:52,747 DEBUG [main] mapreduce.WALInputFormat(263): Scanning hdfs://localhost:55740/backupUT/backup_1471297810954/WALs for WAL files 2016-08-15 14:50:52,748 WARN [main] mapreduce.WALInputFormat(286): File hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/.backup.manifest does not appear to be a WAL file. Skipping... 
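The WALInputFormat entries just above, and the "Found:" entries that follow, are the input-scan step of the incremental-restore MapReduce job: it lists the backup's WALs directory and skips sidecar files such as .backup.manifest. A minimal sketch of that kind of directory filter using only the stock Hadoop FileSystem API; the name-pattern test here is an assumption for illustration, not WALInputFormat's actual check:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WalDirScanner {
      /** Collect WAL files under dir, skipping hidden sidecars like .backup.manifest. */
      static List<Path> scanWalDir(Configuration conf, Path dir) throws IOException {
        FileSystem fs = dir.getFileSystem(conf);
        List<Path> wals = new ArrayList<>();
        for (FileStatus st : fs.listStatus(dir)) {
          String name = st.getPath().getName();
          // WAL names in this log end with ".regiongroup-<n>.<timestamp>"; hidden
          // files (leading '.') are manifests or other metadata, not WALs.
          if (st.isFile() && !name.startsWith(".") && name.contains(".regiongroup-")) {
            wals.add(st.getPath());
          } else {
            System.out.println("Skipping non-WAL entry: " + st.getPath());
          }
        }
        return wals;
      }
    }

The JobResourceUploader warning above ("No job jar file set") would likewise be silenced by calling Job#setJarByClass on the driver class before submission.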
2016-08-15 14:50:52,748 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531 2016-08-15 14:50:52,748 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200 2016-08-15 14:50:52,748 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531 2016-08-15 14:50:52,748 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730 2016-08-15 14:50:52,748 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595 2016-08-15 14:50:52,748 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231 2016-08-15 14:50:52,757 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741938_1114{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 0 2016-08-15 14:50:52,763 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741939_1115{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 0 2016-08-15 14:50:52,778 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741940_1116{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 134676 2016-08-15 14:50:53,401 WARN [ResourceManager Event Processor] capacity.LeafQueue(632): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start 2016-08-15 14:50:53,401 WARN [ResourceManager Event Processor] capacity.LeafQueue(653): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. 
skipping enforcement to allow at least one application to start 2016-08-15 14:50:53,514 DEBUG [10.22.9.171,55793,1471297733428_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-15 14:50:53,542 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:50:53,622 DEBUG [10.22.9.171,55789,1471297733379_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-15 14:50:53,831 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/info 2016-08-15 14:50:53,831 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/backup/3ca5bb17c6b62ed61d22875df8c133ea/meta 2016-08-15 14:50:53,831 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/namespace/606bab3f14856574a09bb943381ad7b3/info 2016-08-15 14:50:53,831 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/table 2016-08-15 14:50:53,832 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/backup/3ca5bb17c6b62ed61d22875df8c133ea/session 2016-08-15 14:50:58,707 INFO [Socket Reader #1 for port 55828] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:50:58,959 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741941_1117{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0 2016-08-15 14:51:00,961 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:00,961 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:01,819 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:01,822 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:02,832 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:03,841 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:06,113 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:06,142 WARN [ContainersLauncher #1] 
nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0001_01_000003 is : 143 2016-08-15 14:51:07,416 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:07,438 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0001_01_000005 is : 143 2016-08-15 14:51:07,740 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:07,762 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0001_01_000004 is : 143 2016-08-15 14:51:07,786 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:07,807 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0001_01_000002 is : 143 2016-08-15 14:51:08,389 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:08,402 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0001_01_000006 is : 143 2016-08-15 14:51:08,757 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:08,769 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0001_01_000007 is : 143 2016-08-15 14:51:08,889 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:12,959 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56304; # active connections: 11 2016-08-15 14:51:13,321 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:51:13,321 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56304 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:51:13,524 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56304 because read count=-1. 
Number of active connections: 11 2016-08-15 14:51:14,148 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741943_1119{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0 2016-08-15 14:51:14,175 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE) 2016-08-15 14:51:14,191 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0001_01_000008 is : 143 2016-08-15 14:51:14,227 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741942_1118{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 15965 2016-08-15 14:51:14,236 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741944_1120{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0 2016-08-15 14:51:14,257 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741945_1121{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0 2016-08-15 14:51:14,277 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741946_1122{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 0 2016-08-15 14:51:15,306 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741938_1114 127.0.0.1:55741 2016-08-15 14:51:15,306 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741939_1115 127.0.0.1:55741 2016-08-15 14:51:15,307 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741940_1116 127.0.0.1:55741 2016-08-15 14:51:15,307 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741942_1118 127.0.0.1:55741 2016-08-15 14:51:15,307 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741941_1117 127.0.0.1:55741 2016-08-15 14:51:15,307 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741937_1113 127.0.0.1:55741 2016-08-15 14:51:15,308 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741932_1108 127.0.0.1:55741 2016-08-15 14:51:15,308 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741934_1110 127.0.0.1:55741 2016-08-15 14:51:15,308 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741927_1103 
127.0.0.1:55741 2016-08-15 14:51:15,308 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741929_1105 127.0.0.1:55741 2016-08-15 14:51:15,308 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741930_1106 127.0.0.1:55741 2016-08-15 14:51:15,308 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741933_1109 127.0.0.1:55741 2016-08-15 14:51:15,309 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741925_1101 127.0.0.1:55741 2016-08-15 14:51:15,309 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741924_1100 127.0.0.1:55741 2016-08-15 14:51:15,309 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741931_1107 127.0.0.1:55741 2016-08-15 14:51:15,309 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741928_1104 127.0.0.1:55741 2016-08-15 14:51:15,309 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741935_1111 127.0.0.1:55741 2016-08-15 14:51:15,309 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741926_1102 127.0.0.1:55741 2016-08-15 14:51:15,310 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741936_1112 127.0.0.1:55741 2016-08-15 14:51:15,891 DEBUG [main] mapreduce.MapReduceRestoreService(78): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471297842714 2016-08-15 14:51:15,891 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x401213a9 connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:51:15,896 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x401213a90x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:51:15,897 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@70e5756c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:51:15,897 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:51:15,897 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:51:15,897 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x401213a9-0x156902d8a140024 connected 2016-08-15 14:51:15,899 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:51:15,899 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56312; # active connections: 11 2016-08-15 14:51:15,900 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:51:15,900 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56312 with version info: version: "2.0.0-SNAPSHOT" url: 
"git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:51:15,907 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available 2016-08-15 14:51:15,909 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471297842714/_SUCCESS 2016-08-15 14:51:15,915 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 14:51:15,915 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56314; # active connections: 12 2016-08-15 14:51:15,916 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:51:15,916 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56314 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:51:15,921 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-15 14:51:15,925 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471297842714/f/1fd6c20ddea84f249d04a44ba5d03517 first=row0 last=row99 2016-08-15 14:51:15,928 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478., hostname=10.22.9.171,55757,1471297725443, seqNum=2 for row with hfile group [{[B@78665b34,hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471297842714/f/1fd6c20ddea84f249d04a44ba5d03517}] 2016-08-15 14:51:15,930 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:51:15,930 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56315; # active connections: 7 2016-08-15 14:51:15,930 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:51:15,931 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56315 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: 
"8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:51:15,931 INFO [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(670): Validating hfile at hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471297842714/f/1fd6c20ddea84f249d04a44ba5d03517 for inclusion in store f region ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:51:15,936 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(682): HFile bounds: first=row0 last=row99 2016-08-15 14:51:15,936 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(684): Region bounds: first= last= 2016-08-15 14:51:15,937 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471297842714/f/1fd6c20ddea84f249d04a44ba5d03517 as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/b9918cdff4d9407c9793273bc1910fe1_SeqId_6_ 2016-08-15 14:51:15,938 INFO [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(742): Loaded HFile hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471297842714/f/1fd6c20ddea84f249d04a44ba5d03517 into store 'f' as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/b9918cdff4d9407c9793273bc1910fe1_SeqId_6_ - updating store file list. 
2016-08-15 14:51:15,944 INFO [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(777): Loaded HFile hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/b9918cdff4d9407c9793273bc1910fe1_SeqId_6_ into store 'f' 2016-08-15 14:51:15,944 INFO [B.defaultRpcServer.handler=2,queue=0,port=55757] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471297842714/f/1fd6c20ddea84f249d04a44ba5d03517 into store f (new location: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/f/b9918cdff4d9407c9793273bc1910fe1_SeqId_6_) 2016-08-15 14:51:15,944 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964 2016-08-15 14:51:15,947 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-15 14:51:15,948 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140024 2016-08-15 14:51:15,948 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:51:15,949 DEBUG [main] mapreduce.MapReduceRestoreService(90): Restore Job finished: 0 2016-08-15 14:51:15,949 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (966947886) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:51:15,950 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140020 2016-08-15 14:51:15,949 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (1970097861) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:51:15,949 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56314 because read count=-1. Number of active connections: 12 2016-08-15 14:51:15,949 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56312 because read count=-1. Number of active connections: 12 2016-08-15 14:51:15,949 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (143038845) to /10.22.9.171:55757 from tyu: closed 2016-08-15 14:51:15,949 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56315 because read count=-1. Number of active connections: 7 2016-08-15 14:51:15,950 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:51:15,951 INFO [main] impl.RestoreClientImpl(292): ns1:test-1471297750223 has been successfully restored to ns1:table1_restore 2016-08-15 14:51:15,951 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (1698663338) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:51:15,951 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56216 because read count=-1. 
Number of active connections: 10 2016-08-15 14:51:15,951 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-15 14:51:15,951 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471297762157 hdfs://localhost:55740/backupUT/backup_1471297762157/ns1/test-1471297750223/ 2016-08-15 14:51:15,951 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471297810954 hdfs://localhost:55740/backupUT/backup_1471297810954/ns1/test-1471297750223/ 2016-08-15 14:51:15,951 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira 2016-08-15 14:51:15,952 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/.backup.manifest 2016-08-15 14:51:15,955 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157 2016-08-15 14:51:15,955 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns2/test-14712977502231/.backup.manifest 2016-08-15 14:51:15,955 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns2:test-14712977502231' to 'ns2:table2_restore' from full backup image hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231 2016-08-15 14:51:15,972 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x57e8d351 connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:51:15,975 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x57e8d3510x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:51:15,976 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d63ed15, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:51:15,976 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:51:15,976 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:51:15,977 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x57e8d351-0x156902d8a140025 connected 2016-08-15 14:51:15,978 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:51:15,978 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56320; # active connections: 10 2016-08-15 14:51:15,979 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:51:15,979 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56320 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:51:15,980 INFO [main] util.RestoreServerUtil(585): Truncating existing target table 'ns2:table2_restore', preserving region splits 2016-08-15 14:51:15,982 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 
14:51:15,982 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56321; # active connections: 11 2016-08-15 14:51:15,982 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:51:15,982 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56321 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:51:15,983 INFO [main] client.HBaseAdmin$10(780): Started disable of ns2:table2_restore 2016-08-15 14:51:15,983 INFO [B.defaultRpcServer.handler=0,queue=0,port=55755] master.HMaster(1986): Client=tyu//10.22.9.171 disable ns2:table2_restore 2016-08-15 14:51:16,088 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns2:table2_restore) id=21 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store. 2016-08-15 14:51:16,091 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-15 14:51:16,092 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:557550000000001 2016-08-15 14:51:16,193 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-15 14:51:16,305 DEBUG [ProcedureExecutor-4] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297876305,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"} 2016-08-15 14:51:16,306 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:51:16,308 INFO [ProcedureExecutor-4] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to DISABLING in META 2016-08-15 14:51:16,396 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-15 14:51:16,413 INFO [ProcedureExecutor-4] procedure.DisableTableProcedure(395): Offlining 1 regions. 2016-08-15 14:51:16,415 DEBUG [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 
(offlining), current state: {398ca33ca6e640575cac0c2baa029825 state=OPEN, ts=1471297833411, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:51:16,415 INFO [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {398ca33ca6e640575cac0c2baa029825 state=OPEN, ts=1471297833411, server=10.22.9.171,55757,1471297725443} to {398ca33ca6e640575cac0c2baa029825 state=PENDING_CLOSE, ts=1471297876415, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:51:16,415 INFO [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. with state=PENDING_CLOSE 2016-08-15 14:51:16,415 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:51:16,416 INFO [PriorityRpcServer.handler=2,queue=0,port=55757] regionserver.RSRpcServices(1314): Close 398ca33ca6e640575cac0c2baa029825, moving to null 2016-08-15 14:51:16,417 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] handler.CloseRegionHandler(90): Processing close of ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:51:16,417 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.: disabling compactions & flushes 2016-08-15 14:51:16,417 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:51:16,417 DEBUG [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.9.171,55757,1471297725443 for region ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:51:16,418 INFO [StoreCloserThread-ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.-1] regionserver.HStore(839): Closed f 2016-08-15 14:51:16,419 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542 2016-08-15 14:51:16,424 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/recovered.edits/6.seqid to file, newSeqId=6, maxSeqId=2 2016-08-15 14:51:16,426 INFO [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 
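The close just performed walks the region through the master's bookkeeping: OPEN, then PENDING_CLOSE once the CLOSE RPC is sent, then OFFLINE when the region server reports CLOSED (visible in the next entries), with each hop written to hbase:meta. A toy model of just the transition legality, not HBase's AssignmentManager:

    import java.util.EnumMap;
    import java.util.EnumSet;
    import java.util.Map;

    /** Toy region-state machine mirroring the transitions visible in this log. */
    enum RegionState {
      OFFLINE, PENDING_OPEN, OPEN, PENDING_CLOSE;

      private static final Map<RegionState, EnumSet<RegionState>> LEGAL =
          new EnumMap<>(RegionState.class);
      static {
        LEGAL.put(OFFLINE, EnumSet.of(PENDING_OPEN));       // master assigns
        LEGAL.put(PENDING_OPEN, EnumSet.of(OPEN, OFFLINE)); // RS opened, or assign failed
        LEGAL.put(OPEN, EnumSet.of(PENDING_CLOSE));         // master sends CLOSE
        LEGAL.put(PENDING_CLOSE, EnumSet.of(OFFLINE));      // RS reported CLOSED
      }

      RegionState transition(RegionState next) {
        if (!LEGAL.get(this).contains(next)) {
          throw new IllegalStateException(this + " -> " + next);
        }
        return next; // the real master also persists the new state to hbase:meta
      }
    }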
2016-08-15 14:51:16,426 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=55755] master.AssignmentManager(2884): Got transition CLOSED for {398ca33ca6e640575cac0c2baa029825 state=PENDING_CLOSE, ts=1471297876415, server=10.22.9.171,55757,1471297725443} from 10.22.9.171,55757,1471297725443 2016-08-15 14:51:16,427 INFO [B.defaultRpcServer.handler=1,queue=0,port=55755] master.RegionStates(1106): Transition {398ca33ca6e640575cac0c2baa029825 state=PENDING_CLOSE, ts=1471297876415, server=10.22.9.171,55757,1471297725443} to {398ca33ca6e640575cac0c2baa029825 state=OFFLINE, ts=1471297876427, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:51:16,427 INFO [B.defaultRpcServer.handler=1,queue=0,port=55755] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. with state=OFFLINE 2016-08-15 14:51:16,427 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:51:16,428 INFO [B.defaultRpcServer.handler=1,queue=0,port=55755] master.RegionStates(590): Offlined 398ca33ca6e640575cac0c2baa029825 from 10.22.9.171,55757,1471297725443 2016-08-15 14:51:16,429 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] handler.CloseRegionHandler(122): Closed ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:51:16,572 DEBUG [ProcedureExecutor-4] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297876572,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"} 2016-08-15 14:51:16,574 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:51:16,575 INFO [ProcedureExecutor-4] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to DISABLED in META 2016-08-15 14:51:16,575 INFO [ProcedureExecutor-4] procedure.DisableTableProcedure(424): Disabled table, ns2:table2_restore, is completed. 
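With the disable procedure finished, the client moves on to the truncate announced by the earlier "Truncating existing target table ..., preserving region splits" line. From the client's point of view the whole sequence is two blocking Admin calls; a minimal sketch, with the table name taken from the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateForRestore {
      public static void main(String[] args) throws Exception {
        TableName target = TableName.valueOf("ns2:table2_restore");
        try (Connection conn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(target)) {
            admin.disableTable(target);      // runs DisableTableProcedure, as above
          }
          admin.truncateTable(target, true); // TruncateTableProcedure, preserveSplits=true
        }
      }
    }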
2016-08-15 14:51:16,695 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@75965af8] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:55741 to delete [blk_1073741924_1100, blk_1073741925_1101, blk_1073741926_1102, blk_1073741927_1103, blk_1073741928_1104, blk_1073741929_1105, blk_1073741930_1106, blk_1073741931_1107, blk_1073741932_1108, blk_1073741933_1109, blk_1073741934_1110, blk_1073741935_1111, blk_1073741936_1112, blk_1073741937_1113, blk_1073741938_1114, blk_1073741939_1115, blk_1073741940_1116, blk_1073741941_1117, blk_1073741942_1118] 2016-08-15 14:51:16,698 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-15 14:51:16,790 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:557550000000001 2016-08-15 14:51:16,791 DEBUG [ProcedureExecutor-4] procedure2.ProcedureExecutor(870): Procedure completed in 699msec: DisableTableProcedure (table=ns2:table2_restore) id=21 owner=tyu state=FINISHED 2016-08-15 14:51:17,204 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=21 2016-08-15 14:51:17,204 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns2:table2_restore completed 2016-08-15 14:51:17,205 INFO [main] client.HBaseAdmin$8(615): Started truncating ns2:table2_restore 2016-08-15 14:51:17,206 INFO [B.defaultRpcServer.handler=4,queue=0,port=55755] master.HMaster(1848): Client=tyu//10.22.9.171 truncate ns2:table2_restore 2016-08-15 14:51:17,313 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns2:table2_restore preserveSplits=true) id=22 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store. 2016-08-15 14:51:17,317 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:557550000000002 2016-08-15 14:51:17,318 DEBUG [ProcedureExecutor-5] procedure.TruncateTableProcedure(87): waiting for 'ns2:table2_restore' regions in transition 2016-08-15 14:51:17,424 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1471297877424,"tag":[],"qualifier":"","vlen":0}]},"row":"ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825."} 2016-08-15 14:51:17,425 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:51:17,426 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => 398ca33ca6e640575cac0c2baa029825, NAME => 'ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.', STARTKEY => '', ENDKEY => ''}] 2016-08-15 14:51:17,427 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(408): Removing 'ns2:table2_restore' from region states. 2016-08-15 14:51:17,428 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(412): Marking 'ns2:table2_restore' as deleted. 
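Note how each procedure brackets its work with an exclusive znode lock under /1/table-lock/<table> (the "Acquired a lock"/"Released" pairs above). HBase implements this with its own ZKInterProcessLockBase; the same mutual-exclusion pattern can be sketched with Apache Curator's InterProcessMutex, used here purely as an illustrative stand-in, not the class in these logs:

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class TableLockSketch {
      public static void main(String[] args) throws Exception {
        CuratorFramework zk = CuratorFrameworkFactory.newClient(
            "localhost:53145", new ExponentialBackoffRetry(1000, 3));
        zk.start();
        InterProcessMutex lock =
            new InterProcessMutex(zk, "/1/table-lock/ns2:table2_restore");
        lock.acquire();          // blocks until the table write lock is held
        try {
          // ... mutate the table's schema or regions here ...
        } finally {
          lock.release();        // the "Released /1/table-lock/..." step above
        }
        zk.close();
      }
    }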
2016-08-15 14:51:17,428 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1471297877428,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns2:table2_restore"} 2016-08-15 14:51:17,429 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:51:17,430 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1726): Deleted table ns2:table2_restore state from META 2016-08-15 14:51:17,543 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(340): Archiving region ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. from FS 2016-08-15 14:51:17,544 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825 2016-08-15 14:51:17,546 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/recovered.edits] 2016-08-15 14:51:17,554 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/9e508b28cd4545e68980bb32d76801e5_SeqId_4_, to hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/archive/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/9e508b28cd4545e68980bb32d76801e5_SeqId_4_ 2016-08-15 14:51:17,559 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/recovered.edits/6.seqid, to hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/archive/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/recovered.edits/6.seqid 2016-08-15 14:51:17,559 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741914_1090 127.0.0.1:55741 2016-08-15 14:51:17,560 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825 2016-08-15 14:51:17,560 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(344): Table 'ns2:table2_restore' archived! 
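Truncate never deletes the old HFiles in place: HFileArchiver moves each store file to the mirrored path under archive/data/<ns>/<table>/<region>/, as the two "Finished archiving from ... to ..." entries show. A rough illustration of that move-preserving-layout idea with plain FileSystem calls; the path arithmetic is an assumption for clarity, and the real archiver additionally handles name collisions and retries:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveSketch {
      /** Move every file under regionDir to the same relative spot under archiveRoot. */
      static void archiveRegion(Configuration conf, Path rootDir, Path archiveRoot,
                                Path regionDir) throws IOException {
        FileSystem fs = regionDir.getFileSystem(conf);
        for (FileStatus child : fs.listStatus(regionDir)) {
          if (!child.isDirectory()) continue;           // family dirs, recovered.edits
          for (FileStatus file : fs.listStatus(child.getPath())) {
            // e.g. data/ns2/table2_restore/<region>/f/<hfile> relative to rootDir
            String relative = file.getPath().toString()
                .substring(rootDir.toString().length() + 1);
            Path target = new Path(archiveRoot, relative);
            fs.mkdirs(target.getParent());
            if (!fs.rename(file.getPath(), target)) {
              throw new IOException("Failed to archive " + file.getPath());
            }
          }
        }
        fs.delete(regionDir, true); // "Deleted all region files in: ..." above
      }
    }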
2016-08-15 14:51:17,562 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741913_1089 127.0.0.1:55741 2016-08-15 14:51:17,683 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741947_1123{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 290 2016-08-15 14:51:18,087 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns2/table2_restore/.tabledesc/.tableinfo.0000000001 2016-08-15 14:51:18,089 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(6162): creating HRegion ns2:table2_restore HTD == 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp Table name == ns2:table2_restore 2016-08-15 14:51:18,098 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741948_1124{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 45 2016-08-15 14:51:18,505 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:51:18,507 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.: disabling compactions & flushes 2016-08-15 14:51:18,507 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:51:18,507 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 
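The descriptor echoed in the creating-HRegion entry ('f' with VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', and so on) is what truncate re-creates the empty table from. Built by hand with the descriptor classes of this HBase era it would look roughly like this, with every value copied from the log line:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.regionserver.BloomType;

    public class DescriptorSketch {
      static HTableDescriptor table2RestoreDescriptor() {
        HColumnDescriptor f = new HColumnDescriptor("f");
        f.setMaxVersions(1);                 // VERSIONS => '1'
        f.setBloomFilterType(BloomType.ROW); // BLOOMFILTER => 'ROW'
        f.setBlocksize(65536);               // BLOCKSIZE => '65536'
        f.setInMemory(false);                // IN_MEMORY => 'false'
        f.setBlockCacheEnabled(true);        // BLOCKCACHE => 'true'
        HTableDescriptor htd =
            new HTableDescriptor(TableName.valueOf("ns2:table2_restore"));
        htd.addFamily(f);
        return htd;
      }
    }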
2016-08-15 14:51:18,615 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825."} 2016-08-15 14:51:18,617 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:51:18,618 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1571): Added 1 2016-08-15 14:51:18,723 INFO [ProcedureExecutor-5] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,55757,1471297725443 2016-08-15 14:51:18,724 ERROR [ProcedureExecutor-5] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-15 14:51:18,724 INFO [ProcedureExecutor-5] master.RegionStates(1106): Transition {398ca33ca6e640575cac0c2baa029825 state=OFFLINE, ts=1471297878723, server=null} to {398ca33ca6e640575cac0c2baa029825 state=PENDING_OPEN, ts=1471297878724, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:51:18,724 INFO [ProcedureExecutor-5] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 
with state=PENDING_OPEN, sn=10.22.9.171,55757,1471297725443 2016-08-15 14:51:18,725 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:51:18,726 INFO [PriorityRpcServer.handler=3,queue=1,port=55757] regionserver.RSRpcServices(1666): Open ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:51:18,731 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] regionserver.HRegion(6339): Opening region: {ENCODED => 398ca33ca6e640575cac0c2baa029825, NAME => 'ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.', STARTKEY => '', ENDKEY => ''} 2016-08-15 14:51:18,732 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table2_restore 398ca33ca6e640575cac0c2baa029825 2016-08-15 14:51:18,732 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:51:18,735 INFO [StoreOpener-398ca33ca6e640575cac0c2baa029825-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-15 14:51:18,735 INFO [StoreOpener-398ca33ca6e640575cac0c2baa029825-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-15 14:51:18,736 DEBUG [StoreOpener-398ca33ca6e640575cac0c2baa029825-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f 2016-08-15 14:51:18,737 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825 2016-08-15 14:51:18,742 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-15 14:51:18,742 INFO [RS_OPEN_REGION-10.22.9.171:55757-1] regionserver.HRegion(871): Onlined 398ca33ca6e640575cac0c2baa029825; next sequenceid=2 2016-08-15 14:51:18,742 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542 2016-08-15 14:51:18,743 INFO [PostOpenDeployTasks:398ca33ca6e640575cac0c2baa029825] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:51:18,743 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.AssignmentManager(2884): Got transition OPENED for {398ca33ca6e640575cac0c2baa029825 state=PENDING_OPEN, ts=1471297878724, server=10.22.9.171,55757,1471297725443} from 10.22.9.171,55757,1471297725443 2016-08-15 14:51:18,743 INFO [B.defaultRpcServer.handler=3,queue=0,port=55755] master.RegionStates(1106): Transition {398ca33ca6e640575cac0c2baa029825 state=PENDING_OPEN, ts=1471297878724, server=10.22.9.171,55757,1471297725443} to {398ca33ca6e640575cac0c2baa029825 state=OPEN, ts=1471297878743, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:51:18,744 INFO [B.defaultRpcServer.handler=3,queue=0,port=55755] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. with state=OPEN, openSeqNum=2, server=10.22.9.171,55757,1471297725443 2016-08-15 14:51:18,744 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:51:18,745 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.RegionStates(452): Onlined 398ca33ca6e640575cac0c2baa029825 on 10.22.9.171,55757,1471297725443 2016-08-15 14:51:18,745 DEBUG [ProcedureExecutor-5] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,55757,1471297725443 2016-08-15 14:51:18,745 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297878745,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"} 2016-08-15 14:51:18,745 ERROR [B.defaultRpcServer.handler=3,queue=0,port=55755] master.TableStateManager(134): Unable to get table ns2:table2_restore state org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311) at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891) at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 2016-08-15 14:51:18,745 DEBUG [PostOpenDeployTasks:398ca33ca6e640575cac0c2baa029825] regionserver.HRegionServer(1979): Finished post open deploy task for ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:51:18,746 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-1] handler.OpenRegionHandler(126): Opened ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 
on 10.22.9.171,55757,1471297725443 2016-08-15 14:51:18,746 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:51:18,747 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to ENABLED in META 2016-08-15 14:51:18,855 DEBUG [ProcedureExecutor-5] procedure.TruncateTableProcedure(129): truncate 'ns2:table2_restore' completed 2016-08-15 14:51:18,965 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:557550000000002 2016-08-15 14:51:18,965 DEBUG [ProcedureExecutor-5] procedure2.ProcedureExecutor(870): Procedure completed in 1.6500sec: TruncateTableProcedure (table=ns2:table2_restore preserveSplits=true) id=22 owner=tyu state=FINISHED 2016-08-15 14:51:19,079 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=22 2016-08-15 14:51:19,080 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns2:table2_restore completed 2016-08-15 14:51:19,080 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-15 14:51:19,080 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140025 2016-08-15 14:51:19,083 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:51:19,084 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:55740; local cluster node: hdfs://localhost:55740 2016-08-15 14:51:19,084 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/archive/data/ns2/test-14712977502231 on local cluster, back it up before restore 2016-08-15 14:51:19,084 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56321 because read count=-1. Number of active connections: 11 2016-08-15 14:51:19,084 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (783339103) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:51:19,084 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56320 because read count=-1. 
Number of active connections: 11 2016-08-15 14:51:19,084 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (-1055086367) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:51:19,102 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741949_1125{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 12093 2016-08-15 14:51:19,505 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore 2016-08-15 14:51:19,507 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore 2016-08-15 14:51:19,526 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d 2016-08-15 14:51:19,526 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x256a78f9 connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:51:19,531 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x256a78f90x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:51:19,532 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c673509, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:51:19,532 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:51:19,532 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:51:19,533 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x256a78f9-0x156902d8a140026 connected 2016-08-15 14:51:19,535 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:51:19,535 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56326; # active connections: 10 2016-08-15 14:51:19,535 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:51:19,536 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56326 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:51:19,542 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available 2016-08-15 14:51:19,551 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 14:51:19,551 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56327; # active connections: 11 2016-08-15 14:51:19,552 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu 
(auth:SIMPLE) 2016-08-15 14:51:19,553 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56327 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:51:19,557 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-15 14:51:19,561 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 first=row0 last=row99 2016-08-15 14:51:19,564 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825., hostname=10.22.9.171,55757,1471297725443, seqNum=2 for row with hfile group [{[B@2f9b88e7,hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2}] 2016-08-15 14:51:19,566 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:51:19,566 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56328; # active connections: 7 2016-08-15 14:51:19,566 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:51:19,567 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56328 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:51:19,567 INFO [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(670): Validating hfile at hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 for inclusion in store f region ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 
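The LoadIncrementalHFiles records above, and the HStore validation/commit records that follow, are the standard bulk-load handshake: the client groups staged HFiles by region, the regionserver checks each file's first/last keys against the region boundaries, then commits the file into the store under a new name. A minimal sketch of driving that tool programmatically is below; the staging path is copied from the log, and the doBulkLoad(Path, Admin, Table, RegionLocator) overload is assumed from the HBase client API of this era, so treat it as illustrative rather than the exact call RestoreServerUtil makes.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadRestoredHFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("ns2:table2_restore");
    // Staging directory taken from the "Restoring HFiles from directory" line
    Path staged = new Path(
        "hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(tn);
         RegionLocator locator = conn.getRegionLocator(tn)) {
      // For each column-family subdirectory, validates HFile key ranges
      // against region boundaries and moves the files into the store
      new LoadIncrementalHFiles(conf).doBulkLoad(staged, admin, table, locator);
    }
  }
}
```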
2016-08-15 14:51:19,570 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(682): HFile bounds: first=row0 last=row99
2016-08-15 14:51:19,570 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(684): Region bounds: first= last=
2016-08-15 14:51:19,572 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/05c5a866529d48db90729fe61b4ff7e0_SeqId_4_
2016-08-15 14:51:19,573 INFO [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(742): Loaded HFile hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 into store 'f' as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/05c5a866529d48db90729fe61b4ff7e0_SeqId_4_ - updating store file list.
2016-08-15 14:51:19,579 INFO [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(777): Loaded HFile hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/05c5a866529d48db90729fe61b4ff7e0_SeqId_4_ into store 'f
2016-08-15 14:51:19,579 INFO [B.defaultRpcServer.handler=4,queue=0,port=55757] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:55740/user/tyu/hbase-staging/restore/7ac1188f2e9c4e31e67f0d3df5f7670d/f/d40b937f84a74f16a772129b5836b4f2 into store f (new location: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/05c5a866529d48db90729fe61b4ff7e0_SeqId_4_)
2016-08-15 14:51:19,579 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542
2016-08-15 14:51:19,580 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-15 14:51:19,581 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140026
2016-08-15 14:51:19,581 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:51:19,582 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns2:test-14712977502231' to 'ns2:table2_restore' from log dirs: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs
2016-08-15 14:51:19,582 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56327 because read count=-1. Number of active connections: 11
2016-08-15 14:51:19,582 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56326 because read count=-1. Number of active connections: 11
2016-08-15 14:51:19,582 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (-171456880) to /10.22.9.171:55757 from tyu: closed
2016-08-15 14:51:19,582 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-826605183) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:51:19,582 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (-1725551187) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:51:19,582 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56328 because read count=-1. Number of active connections: 7
2016-08-15 14:51:19,583 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xc308bc3 connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:51:19,585 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0xc308bc30x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:51:19,586 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2619d632, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:51:19,586 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:51:19,586 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:51:19,587 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0xc308bc3-0x156902d8a140027 connected
2016-08-15 14:51:19,588 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:51:19,588 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56330; # active connections: 10
2016-08-15 14:51:19,589 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:51:19,589 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56330 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:51:19,590 INFO [main] mapreduce.MapReduceRestoreService(56): Restore incremental backup from directory hdfs://localhost:55740/backupUT/backup_1471297810954/WALs from hbase tables ,ns2:test-14712977502231 to tables ,ns2:table2_restore
2016-08-15 14:51:19,590 INFO [main] mapreduce.MapReduceRestoreService(61): Restore ns2:test-14712977502231 into ns2:table2_restore
2016-08-15 14:51:19,591 DEBUG [main] mapreduce.WALPlayer(299): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471297879590
2016-08-15 14:51:19,591 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6cca2408 connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:51:19,593 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x6cca24080x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:51:19,594 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@142c4a8e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:51:19,594 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:51:19,594 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:51:19,595 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x6cca2408-0x156902d8a140028 connected
2016-08-15 14:51:19,597 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-15 14:51:19,597 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56332; # active connections: 11
2016-08-15 14:51:19,597 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:51:19,598 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56332 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:51:19,599 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled
2016-08-15 14:51:19,599 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns2:test-14712977502231
2016-08-15 14:51:19,602 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:51:19,602 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56333; # active connections: 12
2016-08-15 14:51:19,603 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:51:19,603 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56333 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:51:19,606 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count
2016-08-15 14:51:19,606 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_a1f22f96-76f7-483b-9f9f-a180499e9032
2016-08-15 14:51:19,612 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741950_1126{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 153
2016-08-15 14:51:19,700 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@75965af8] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:55741 to delete [blk_1073741913_1089, blk_1073741914_1090]
2016-08-15 14:51:20,021 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
2016-08-15 14:51:20,484 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0001_000001 (auth:SIMPLE)
2016-08-15 14:51:20,758 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-3182228954114404507.jar
2016-08-15 14:51:21,933 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-15 14:51:29,799 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-5187337068967932737.jar
2016-08-15 14:51:31,445 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-4925587583299737248.jar
2016-08-15 14:51:31,494 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-5570038143304833600.jar
2016-08-15 14:51:38,339 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-6950165602652139384.jar
2016-08-15 14:51:38,340 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-15 14:51:38,340 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-15 14:51:38,340 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-15 14:51:38,340 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-15 14:51:38,341 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-15 14:51:38,341 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-15 14:51:38,555 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-883358424510198035.jar
2016-08-15 14:51:38,555 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-883358424510198035.jar
2016-08-15 14:51:39,748 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-2542316006057940512.jar
2016-08-15 14:51:39,749 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-883358424510198035.jar
2016-08-15 14:51:39,749 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-883358424510198035.jar
2016-08-15 14:51:39,749 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-2542316006057940512.jar
2016-08-15 14:51:39,750 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar
2016-08-15 14:51:39,750 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns2:test-14712977502231 output configured.
2016-08-15 14:51:39,750 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-15 14:51:39,750 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140028
2016-08-15 14:51:39,751 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:51:39,752 DEBUG [main] mapreduce.WALPlayer(316): success configuring load incremental job
2016-08-15 14:51:39,752 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56332 because read count=-1. Number of active connections: 12
2016-08-15 14:51:39,752 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (372639812) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:51:39,752 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56333 because read count=-1. Number of active connections: 12
2016-08-15 14:51:39,752 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (647852260) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:51:39,753 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-15 14:51:39,790 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741951_1127{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 1556922
2016-08-15 14:51:40,211 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741952_1128{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 533455
2016-08-15 14:51:40,641 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741953_1129{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:51:40,649 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741954_1130{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 38156
2016-08-15 14:51:41,066 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741955_1131{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 662656
2016-08-15 14:51:41,483 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741956_1132{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 662656
2016-08-15 14:51:41,901 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741957_1133{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 112558
2016-08-15 14:51:42,319 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741958_1134{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 1475955
2016-08-15 14:51:42,735 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741959_1135{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:51:42,759 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741960_1136{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:51:42,769 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741961_1137{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:51:42,780 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741962_1138{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 1351207
2016-08-15 14:51:43,198 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741963_1139{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 1795932
2016-08-15 14:51:43,627 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741964_1140{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:51:43,628 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-15 14:51:43,641 DEBUG [main] mapreduce.WALInputFormat(263): Scanning hdfs://localhost:55740/backupUT/backup_1471297810954/WALs for WAL files
2016-08-15 14:51:43,642 WARN [main] mapreduce.WALInputFormat(286): File hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/.backup.manifest does not appear to be an WAL file. Skipping...
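The job configured above is WALPlayer in bulk-output mode: WALInputFormat enumerates the backed-up WAL files (skipping the non-WAL .backup.manifest), HFileOutputFormat2 partitions reducers to match the target table's current regions, and the job writes HFiles to the bulk_output directory instead of issuing live Puts. A rough sketch of an equivalent standalone invocation is below, assuming WALPlayer's Tool interface and its public BULK_OUTPUT_CONF_KEY setting from this era; the output path is a hypothetical stand-in for the temp directory the log shows.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.WALPlayer;
import org.apache.hadoop.util.ToolRunner;

public class ReplayBackupWALs {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Emit HFiles instead of live Puts so the result can be bulk-loaded
    // afterwards (cf. the bulk_output-ns2-table2_restore-... directory)
    conf.set(WALPlayer.BULK_OUTPUT_CONF_KEY,
        "/tmp/bulk_output-ns2-table2_restore"); // hypothetical output path
    int rc = ToolRunner.run(conf, new WALPlayer(), new String[] {
        "hdfs://localhost:55740/backupUT/backup_1471297810954/WALs", // WAL dir
        "ns2:test-14712977502231", // source table recorded in the WALs
        "ns2:table2_restore"       // mapping to the restore target table
    });
    System.exit(rc);
  }
}
```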
2016-08-15 14:51:43,642 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531
2016-08-15 14:51:43,642 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200
2016-08-15 14:51:43,642 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531
2016-08-15 14:51:43,642 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730
2016-08-15 14:51:43,643 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595
2016-08-15 14:51:43,643 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231
2016-08-15 14:51:43,650 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741965_1141{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 1237
2016-08-15 14:51:44,060 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741966_1142{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:51:44,091 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741967_1143{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 134678
2016-08-15 14:51:44,539 WARN [ResourceManager Event Processor] capacity.LeafQueue(632): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-15 14:51:44,539 WARN [ResourceManager Event Processor] capacity.LeafQueue(653): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-15 14:51:45,290 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:47,309 DEBUG [10.22.9.171,55757,1471297725443_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-15 14:51:47,309 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-15 14:51:49,947 INFO [10.22.9.171,55755,1471297724766_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x561363b connecting to ZooKeeper ensemble=localhost:53145
2016-08-15 14:51:49,951 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x561363b0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-15 14:51:49,952 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@52e980e5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-15 14:51:49,952 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-15 14:51:49,952 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-15 14:51:49,952 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup
2016-08-15 14:51:49,953 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x561363b-0x156902d8a140029 connected
2016-08-15 14:51:49,955 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:51:49,955 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56382; # active connections: 11
2016-08-15 14:51:49,956 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:51:49,956 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56382 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:51:49,964 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-15 14:51:49,964 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56383; # active connections: 7
2016-08-15 14:51:49,965 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:51:49,965 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56383 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:51:49,970 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297729233
2016-08-15 14:51:49,971 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297729233
2016-08-15 14:51:49,971 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531
2016-08-15 14:51:49,972 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531
2016-08-15 14:51:49,972 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961
2016-08-15 14:51:49,973 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961
2016-08-15 14:51:49,973 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297729233
2016-08-15 14:51:49,974 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297729233
2016-08-15 14:51:49,974 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531
2016-08-15 14:51:49,975 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531
2016-08-15 14:51:49,975 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595
2016-08-15 14:51:49,976 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595
2016-08-15 14:51:49,976 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231
2016-08-15 14:51:49,977 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231
2016-08-15 14:51:49,977 INFO [10.22.9.171,55755,1471297724766_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140029
2016-08-15 14:51:49,978 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:51:49,978 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56382 because read count=-1. Number of active connections: 11
2016-08-15 14:51:49,978 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56383 because read count=-1. Number of active connections: 7
2016-08-15 14:51:49,978 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (263293391) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:51:49,978 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (-943645674) to /10.22.9.171:55757 from tyu: closed
2016-08-15 14:51:50,154 INFO [Socket Reader #1 for port 55828] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:50,405 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741968_1144{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:51:52,382 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:52,382 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:53,242 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:53,243 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:53,568 DEBUG [10.22.9.171,55793,1471297733428_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-15 14:51:53,591 DEBUG [10.22.9.171,55789,1471297733379_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-15 14:51:53,843 DEBUG [region-location-3] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/info
2016-08-15 14:51:53,843 DEBUG [region-location-4] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/namespace/606bab3f14856574a09bb943381ad7b3/info
2016-08-15 14:51:53,843 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/backup/3ca5bb17c6b62ed61d22875df8c133ea/meta
2016-08-15 14:51:53,843 DEBUG [region-location-3] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/table
2016-08-15 14:51:53,844 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/backup/3ca5bb17c6b62ed61d22875df8c133ea/session
2016-08-15 14:51:54,261 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:55,271 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:56,822 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:56,845 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0002_01_000002 is : 143
2016-08-15 14:51:58,451 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:58,474 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0002_01_000004 is : 143
2016-08-15 14:51:58,619 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:58,642 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0002_01_000003 is : 143
2016-08-15 14:51:58,688 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:58,709 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0002_01_000005 is : 143
2016-08-15 14:51:59,311 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:59,406 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:59,425 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0002_01_000006 is : 143
2016-08-15 14:51:59,844 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:51:59,860 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0002_01_000007 is : 143
2016-08-15 14:52:03,497 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56434; # active connections: 11
2016-08-15 14:52:03,869 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-15 14:52:03,869 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56434 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0
2016-08-15 14:52:04,063 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56434 because read count=-1. Number of active connections: 11
2016-08-15 14:52:04,686 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741970_1146{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:52:04,712 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:52:04,728 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0002_01_000008 is : 143
2016-08-15 14:52:04,757 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741969_1145{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 64386
2016-08-15 14:52:05,171 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741971_1147{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:52:05,194 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741972_1148{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:52:05,212 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741973_1149{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:52:06,234 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741965_1141 127.0.0.1:55741
2016-08-15 14:52:06,234 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741966_1142 127.0.0.1:55741
2016-08-15 14:52:06,234 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741967_1143 127.0.0.1:55741
2016-08-15 14:52:06,234 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741969_1145 127.0.0.1:55741
2016-08-15 14:52:06,234 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741968_1144 127.0.0.1:55741
2016-08-15 14:52:06,235 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741963_1139 127.0.0.1:55741
2016-08-15 14:52:06,235 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741964_1140 127.0.0.1:55741
2016-08-15 14:52:06,235 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741956_1132 127.0.0.1:55741
2016-08-15 14:52:06,235 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741962_1138 127.0.0.1:55741
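Interleaved with the MapReduce job, the master's ChoreService runs BackupLogCleaner over oldWALs: each WAL is looked up in hbase:backup and removed only if some backup already covers it ("Found log file in hbase:backup, deleting" vs. "Didn't find this log in hbase:backup, keeping"). Schematically, such a cleaner is a log-cleaner delegate along the lines of the sketch below; this is not the actual BackupLogCleaner source, and the in-memory set is a purely illustrative stand-in for the real BackupSystemTable lookup.

```java
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.hbase.master.cleaner.BaseLogCleanerDelegate;

/**
 * Schematic WAL-retention delegate in the spirit of BackupLogCleaner:
 * a WAL under oldWALs may be deleted only once the backup system table
 * records it as backed up.
 */
public class SketchBackupLogCleaner extends BaseLogCleanerDelegate {
  // Hypothetical stand-in for "Check if WAL file has been already
  // backed up in hbase:backup"; the real class queries BackupSystemTable.
  private final Set<String> backedUpWalNames = new HashSet<>();

  @Override
  public boolean isLogDeletable(FileStatus fStat) {
    // Known to hbase:backup -> safe to delete; otherwise keep the WAL
    return backedUpWalNames.contains(fStat.getPath().getName());
  }
}
```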
2016-08-15 14:52:06,235 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741960_1136 127.0.0.1:55741 2016-08-15 14:52:06,235 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741954_1130 127.0.0.1:55741 2016-08-15 14:52:06,235 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741953_1129 127.0.0.1:55741 2016-08-15 14:52:06,235 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741955_1131 127.0.0.1:55741 2016-08-15 14:52:06,235 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741951_1127 127.0.0.1:55741 2016-08-15 14:52:06,236 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741958_1134 127.0.0.1:55741 2016-08-15 14:52:06,236 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741957_1133 127.0.0.1:55741 2016-08-15 14:52:06,236 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741959_1135 127.0.0.1:55741 2016-08-15 14:52:06,236 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741952_1128 127.0.0.1:55741 2016-08-15 14:52:06,236 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741961_1137 127.0.0.1:55741 2016-08-15 14:52:06,761 DEBUG [main] mapreduce.MapReduceRestoreService(78): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471297879590 2016-08-15 14:52:06,761 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4cab4509 connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:52:06,766 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x4cab45090x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:52:06,766 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d055fe7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:52:06,767 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:52:06,767 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:52:06,767 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x4cab4509-0x156902d8a14002b connected 2016-08-15 14:52:06,769 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:52:06,769 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56442; # active connections: 11 2016-08-15 14:52:06,769 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:06,770 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56442 with version info: version: "2.0.0-SNAPSHOT" url: 
"git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:06,776 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available 2016-08-15 14:52:06,779 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471297879590/_SUCCESS 2016-08-15 14:52:06,784 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 14:52:06,784 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56445; # active connections: 12 2016-08-15 14:52:06,785 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:06,786 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56445 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:06,790 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-15 14:52:06,794 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471297879590/f/01d10cfb3da14d1c8e51d66cf142dd42 first=row0 last=row99 2016-08-15 14:52:06,797 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825., hostname=10.22.9.171,55757,1471297725443, seqNum=2 for row with hfile group [{[B@266a0e5c,hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471297879590/f/01d10cfb3da14d1c8e51d66cf142dd42}] 2016-08-15 14:52:06,798 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:52:06,798 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56446; # active connections: 7 2016-08-15 14:52:06,799 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:06,799 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56446 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: 
"8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:06,799 INFO [B.defaultRpcServer.handler=0,queue=0,port=55757] regionserver.HStore(670): Validating hfile at hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471297879590/f/01d10cfb3da14d1c8e51d66cf142dd42 for inclusion in store f region ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:52:06,803 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55757] regionserver.HStore(682): HFile bounds: first=row0 last=row99 2016-08-15 14:52:06,803 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55757] regionserver.HStore(684): Region bounds: first= last= 2016-08-15 14:52:06,805 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55757] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471297879590/f/01d10cfb3da14d1c8e51d66cf142dd42 as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/cb0dbaec1fcf46f9afa77d00877ac351_SeqId_6_ 2016-08-15 14:52:06,806 INFO [B.defaultRpcServer.handler=0,queue=0,port=55757] regionserver.HStore(742): Loaded HFile hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471297879590/f/01d10cfb3da14d1c8e51d66cf142dd42 into store 'f' as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/cb0dbaec1fcf46f9afa77d00877ac351_SeqId_6_ - updating store file list. 2016-08-15 14:52:06,811 INFO [B.defaultRpcServer.handler=0,queue=0,port=55757] regionserver.HStore(777): Loaded HFile hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/cb0dbaec1fcf46f9afa77d00877ac351_SeqId_6_ into store 'f 2016-08-15 14:52:06,811 INFO [B.defaultRpcServer.handler=0,queue=0,port=55757] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471297879590/f/01d10cfb3da14d1c8e51d66cf142dd42 into store f (new location: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/f/cb0dbaec1fcf46f9afa77d00877ac351_SeqId_6_) 2016-08-15 14:52:06,812 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542 2016-08-15 14:52:06,814 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-15 14:52:06,815 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14002b 2016-08-15 14:52:06,816 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:52:06,817 DEBUG [main] mapreduce.MapReduceRestoreService(90): Restore Job finished:0 2016-08-15 14:52:06,817 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56445 because read count=-1. 
Number of active connections: 12 2016-08-15 14:52:06,817 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (2005167513) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:52:06,817 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56442 because read count=-1. Number of active connections: 12 2016-08-15 14:52:06,817 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56446 because read count=-1. Number of active connections: 7 2016-08-15 14:52:06,817 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (195461589) to /10.22.9.171:55757 from tyu: closed 2016-08-15 14:52:06,817 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (-1363843680) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:52:06,817 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140027 2016-08-15 14:52:06,818 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:52:06,818 INFO [main] impl.RestoreClientImpl(292): ns2:test-14712977502231 has been successfully restored to ns2:table2_restore 2016-08-15 14:52:06,819 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-15 14:52:06,819 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471297762157 hdfs://localhost:55740/backupUT/backup_1471297762157/ns2/test-14712977502231/ 2016-08-15 14:52:06,819 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471297810954 hdfs://localhost:55740/backupUT/backup_1471297810954/ns2/test-14712977502231/ 2016-08-15 14:52:06,819 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56330 because read count=-1. Number of active connections: 10 2016-08-15 14:52:06,819 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (-1766636611) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:52:06,819 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. 
to be implemented in future jira 2016-08-15 14:52:06,820 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:55740/backupUT/backup_1471297762157/ns3/test-14712977502232/.backup.manifest 2016-08-15 14:52:06,823 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471297762157 2016-08-15 14:52:06,823 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471297762157/ns3/test-14712977502232/.backup.manifest 2016-08-15 14:52:06,823 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns3:test-14712977502232' to 'ns3:table3_restore' from full backup image hdfs://localhost:55740/backupUT/backup_1471297762157/ns3/test-14712977502232 2016-08-15 14:52:06,829 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:55740/backupUT/backup_1471297762157/ns3/test-14712977502232/archive/data/ns3/test-14712977502232 does not exist 2016-08-15 14:52:06,830 DEBUG [main] util.RestoreServerUtil(315): find table descriptor but no archive dir for table ns3:test-14712977502232, will only create table 2016-08-15 14:52:06,830 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3581fee9 connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:52:06,832 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x3581fee90x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:52:06,833 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6e5cbb6c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:52:06,833 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:52:06,833 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:52:06,834 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x3581fee9-0x156902d8a14002c connected 2016-08-15 14:52:06,835 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:52:06,836 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56451; # active connections: 10 2016-08-15 14:52:06,836 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:06,837 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56451 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:06,838 INFO [main] util.RestoreServerUtil(585): Truncating existing target table 'ns3:table3_restore', preserving region splits 2016-08-15 14:52:06,839 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 14:52:06,839 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56452; # active connections: 11 2016-08-15 
14:52:06,840 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:06,840 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56452 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:06,841 INFO [main] client.HBaseAdmin$10(780): Started disable of ns3:table3_restore 2016-08-15 14:52:06,841 INFO [B.defaultRpcServer.handler=2,queue=0,port=55755] master.HMaster(1986): Client=tyu//10.22.9.171 disable ns3:table3_restore 2016-08-15 14:52:06,948 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns3:table3_restore) id=23 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store. 2016-08-15 14:52:06,951 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-15 14:52:06,951 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:557550000000001 2016-08-15 14:52:07,055 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-15 14:52:07,162 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297927161,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-15 14:52:07,163 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:07,165 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to DISABLING in META 2016-08-15 14:52:07,260 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-15 14:52:07,274 INFO [ProcedureExecutor-6] procedure.DisableTableProcedure(395): Offlining 1 regions. 2016-08-15 14:52:07,276 DEBUG [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. (offlining), current state: {1945f514e609ff061d2c4aee1cdb82e3 state=OPEN, ts=1471297835757, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:52:07,276 INFO [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {1945f514e609ff061d2c4aee1cdb82e3 state=OPEN, ts=1471297835757, server=10.22.9.171,55757,1471297725443} to {1945f514e609ff061d2c4aee1cdb82e3 state=PENDING_CLOSE, ts=1471297927276, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:52:07,276 INFO [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 
with state=PENDING_CLOSE 2016-08-15 14:52:07,277 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:07,278 INFO [PriorityRpcServer.handler=0,queue=0,port=55757] regionserver.RSRpcServices(1314): Close 1945f514e609ff061d2c4aee1cdb82e3, moving to null 2016-08-15 14:52:07,279 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(90): Processing close of ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:07,279 DEBUG [10.22.9.171,55755,1471297724766-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.9.171,55757,1471297725443 for region ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:07,279 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.: disabling compactions & flushes 2016-08-15 14:52:07,280 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:07,280 INFO [StoreCloserThread-ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.-1] regionserver.HStore(839): Closed f 2016-08-15 14:52:07,281 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297811122 2016-08-15 14:52:07,286 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2 2016-08-15 14:52:07,287 INFO [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:07,288 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.AssignmentManager(2884): Got transition CLOSED for {1945f514e609ff061d2c4aee1cdb82e3 state=PENDING_CLOSE, ts=1471297927276, server=10.22.9.171,55757,1471297725443} from 10.22.9.171,55757,1471297725443 2016-08-15 14:52:07,289 INFO [B.defaultRpcServer.handler=4,queue=0,port=55755] master.RegionStates(1106): Transition {1945f514e609ff061d2c4aee1cdb82e3 state=PENDING_CLOSE, ts=1471297927276, server=10.22.9.171,55757,1471297725443} to {1945f514e609ff061d2c4aee1cdb82e3 state=OFFLINE, ts=1471297927289, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:52:07,289 INFO [B.defaultRpcServer.handler=4,queue=0,port=55755] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 
with state=OFFLINE 2016-08-15 14:52:07,289 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:07,290 INFO [B.defaultRpcServer.handler=4,queue=0,port=55755] master.RegionStates(590): Offlined 1945f514e609ff061d2c4aee1cdb82e3 from 10.22.9.171,55757,1471297725443 2016-08-15 14:52:07,291 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(122): Closed ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:07,433 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297927433,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-15 14:52:07,435 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:07,437 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to DISABLED in META 2016-08-15 14:52:07,437 INFO [ProcedureExecutor-6] procedure.DisableTableProcedure(424): Disabled table, ns3:table3_restore, is completed. 2016-08-15 14:52:07,567 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-15 14:52:07,611 INFO [IPC Server handler 7 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741971_1147 127.0.0.1:55741 2016-08-15 14:52:07,615 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741944_1120 127.0.0.1:55741 2016-08-15 14:52:07,650 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:557550000000001 2016-08-15 14:52:07,651 DEBUG [ProcedureExecutor-6] procedure2.ProcedureExecutor(870): Procedure completed in 702msec: DisableTableProcedure (table=ns3:table3_restore) id=23 owner=tyu state=FINISHED 2016-08-15 14:52:07,742 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@75965af8] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:55741 to delete [blk_1073741952_1128, blk_1073741953_1129, blk_1073741954_1130, blk_1073741955_1131, blk_1073741956_1132, blk_1073741957_1133, blk_1073741958_1134, blk_1073741959_1135, blk_1073741960_1136, blk_1073741961_1137, blk_1073741962_1138, blk_1073741963_1139, blk_1073741964_1140, blk_1073741965_1141, blk_1073741966_1142, blk_1073741967_1143, blk_1073741968_1144, blk_1073741969_1145, blk_1073741971_1147, blk_1073741944_1120, blk_1073741951_1127] 2016-08-15 14:52:08,069 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-15 14:52:08,070 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns3:table3_restore completed 2016-08-15 14:52:08,072 INFO [main] client.HBaseAdmin$8(615): Started truncating ns3:table3_restore 2016-08-15 14:52:08,073 INFO [B.defaultRpcServer.handler=0,queue=0,port=55755] master.HMaster(1848): Client=tyu//10.22.9.171 truncate ns3:table3_restore 2016-08-15 14:52:08,178 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=55755] 
procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns3:table3_restore preserveSplits=true) id=24 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store. 2016-08-15 14:52:08,181 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:557550000000002 2016-08-15 14:52:08,182 DEBUG [ProcedureExecutor-7] procedure.TruncateTableProcedure(87): waiting for 'ns3:table3_restore' regions in transition 2016-08-15 14:52:08,289 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1471297928289,"tag":[],"qualifier":"","vlen":0}]},"row":"ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3."} 2016-08-15 14:52:08,290 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:08,292 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => 1945f514e609ff061d2c4aee1cdb82e3, NAME => 'ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.', STARTKEY => '', ENDKEY => ''}] 2016-08-15 14:52:08,294 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(408): Removing 'ns3:table3_restore' from region states. 2016-08-15 14:52:08,297 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(412): Marking 'ns3:table3_restore' as deleted. 2016-08-15 14:52:08,298 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1471297928298,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns3:table3_restore"} 2016-08-15 14:52:08,299 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:08,300 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1726): Deleted table ns3:table3_restore state from META 2016-08-15 14:52:08,409 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(340): Archiving region ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 
from FS 2016-08-15 14:52:08,409 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3 2016-08-15 14:52:08,412 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3/recovered.edits] 2016-08-15 14:52:08,421 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3/recovered.edits/4.seqid, to hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/archive/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3/recovered.edits/4.seqid 2016-08-15 14:52:08,422 INFO [IPC Server handler 4 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741917_1093 127.0.0.1:55741 2016-08-15 14:52:08,423 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3 2016-08-15 14:52:08,423 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(344): Table 'ns3:table3_restore' archived! 
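The entries from 14:52:06,838 through 14:52:08 above show how the restore path replaces an existing target table: the client disables ns3:table3_restore, then the master runs DisableTableProcedure followed by TruncateTableProcedure with preserveSplits=true, archiving the old region files before re-creating the region (seen just below). A minimal client-side sketch of the same disable-then-truncate sequence through the public Admin API; the class name and standalone main are illustrative, not part of the test:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class TruncateRestoreTarget {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      TableName target = TableName.valueOf("ns3:table3_restore"); // target table from the log
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Admin admin = conn.getAdmin()) {
        // Mirrors DisableTableProcedure: regions are unassigned and the table
        // state in hbase:meta moves DISABLING -> DISABLED.
        if (admin.isTableEnabled(target)) {
          admin.disableTable(target);
        }
        // Mirrors TruncateTableProcedure (preserveSplits=true, as in the log):
        // region files are archived, the table is re-created with the same
        // split keys, and it comes back ENABLED.
        admin.truncateTable(target, true /* preserveSplits */);
      }
    }
  }

Preserving splits keeps the original region boundaries, so the HFiles produced later in the restore can be mapped back onto the same split points.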
2016-08-15 14:52:08,424 INFO [IPC Server handler 0 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741916_1092 127.0.0.1:55741 2016-08-15 14:52:08,544 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741974_1150{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 291 2016-08-15 14:52:08,950 DEBUG [ProcedureExecutor-7] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp/data/ns3/table3_restore/.tabledesc/.tableinfo.0000000001 2016-08-15 14:52:08,952 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(6162): creating HRegion ns3:table3_restore HTD == 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/.tmp Table name == ns3:table3_restore 2016-08-15 14:52:08,961 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741975_1151{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 45 2016-08-15 14:52:09,367 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:09,368 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.: disabling compactions & flushes 2016-08-15 14:52:09,369 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:09,369 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 
2016-08-15 14:52:09,480 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3."} 2016-08-15 14:52:09,481 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:09,482 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1571): Added 1 2016-08-15 14:52:09,589 INFO [ProcedureExecutor-7] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,55757,1471297725443 2016-08-15 14:52:09,590 ERROR [ProcedureExecutor-7] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-15 14:52:09,591 INFO [ProcedureExecutor-7] master.RegionStates(1106): Transition {1945f514e609ff061d2c4aee1cdb82e3 state=OFFLINE, ts=1471297929589, server=null} to {1945f514e609ff061d2c4aee1cdb82e3 state=PENDING_OPEN, ts=1471297929591, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:52:09,591 INFO [ProcedureExecutor-7] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 
with state=PENDING_OPEN, sn=10.22.9.171,55757,1471297725443 2016-08-15 14:52:09,592 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:09,594 INFO [PriorityRpcServer.handler=3,queue=1,port=55757] regionserver.RSRpcServices(1666): Open ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:09,599 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(6339): Opening region: {ENCODED => 1945f514e609ff061d2c4aee1cdb82e3, NAME => 'ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.', STARTKEY => '', ENDKEY => ''} 2016-08-15 14:52:09,599 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table3_restore 1945f514e609ff061d2c4aee1cdb82e3 2016-08-15 14:52:09,600 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:09,603 INFO [StoreOpener-1945f514e609ff061d2c4aee1cdb82e3-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1102696, freeSize=1042859608, maxSize=1043962304, heapSize=1102696, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-15 14:52:09,603 INFO [StoreOpener-1945f514e609ff061d2c4aee1cdb82e3-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-15 14:52:09,604 DEBUG [StoreOpener-1945f514e609ff061d2c4aee1cdb82e3-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3/f 2016-08-15 14:52:09,605 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3 2016-08-15 14:52:09,610 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-15 14:52:09,610 INFO [RS_OPEN_REGION-10.22.9.171:55757-2] regionserver.HRegion(871): Onlined 1945f514e609ff061d2c4aee1cdb82e3; next sequenceid=2 2016-08-15 14:52:09,611 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297811122 2016-08-15 14:52:09,612 INFO [PostOpenDeployTasks:1945f514e609ff061d2c4aee1cdb82e3] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:09,612 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.AssignmentManager(2884): Got transition OPENED for {1945f514e609ff061d2c4aee1cdb82e3 state=PENDING_OPEN, ts=1471297929591, server=10.22.9.171,55757,1471297725443} from 10.22.9.171,55757,1471297725443 2016-08-15 14:52:09,612 INFO [B.defaultRpcServer.handler=4,queue=0,port=55755] master.RegionStates(1106): Transition {1945f514e609ff061d2c4aee1cdb82e3 state=PENDING_OPEN, ts=1471297929591, server=10.22.9.171,55757,1471297725443} to {1945f514e609ff061d2c4aee1cdb82e3 state=OPEN, ts=1471297929612, server=10.22.9.171,55757,1471297725443} 2016-08-15 14:52:09,613 INFO [B.defaultRpcServer.handler=4,queue=0,port=55755] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. with state=OPEN, openSeqNum=2, server=10.22.9.171,55757,1471297725443 2016-08-15 14:52:09,613 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:09,614 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=55755] master.RegionStates(452): Onlined 1945f514e609ff061d2c4aee1cdb82e3 on 10.22.9.171,55757,1471297725443 2016-08-15 14:52:09,614 DEBUG [ProcedureExecutor-7] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,55757,1471297725443 2016-08-15 14:52:09,614 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471297929614,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-15 14:52:09,614 ERROR [B.defaultRpcServer.handler=4,queue=0,port=55755] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-15 14:52:09,615 DEBUG [PostOpenDeployTasks:1945f514e609ff061d2c4aee1cdb82e3] regionserver.HRegionServer(1979): Finished post open deploy task for ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 2016-08-15 14:52:09,616 DEBUG [RS_OPEN_REGION-10.22.9.171:55757-2] handler.OpenRegionHandler(126): Opened ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3. 
on 10.22.9.171,55757,1471297725443 2016-08-15 14:52:09,616 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:09,616 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to ENABLED in META 2016-08-15 14:52:09,719 DEBUG [ProcedureExecutor-7] procedure.TruncateTableProcedure(129): truncate 'ns3:table3_restore' completed 2016-08-15 14:52:09,827 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:557550000000002 2016-08-15 14:52:09,828 DEBUG [ProcedureExecutor-7] procedure2.ProcedureExecutor(870): Procedure completed in 1.6460sec: TruncateTableProcedure (table=ns3:table3_restore preserveSplits=true) id=24 owner=tyu state=FINISHED 2016-08-15 14:52:09,943 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755] master.MasterRpcServices(974): Checking to see if procedure is done procId=24 2016-08-15 14:52:09,944 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns3:table3_restore completed 2016-08-15 14:52:09,944 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-15 14:52:09,944 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14002c 2016-08-15 14:52:09,947 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:52:09,948 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns3:test-14712977502232' to 'ns3:table3_restore' from log dirs: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs 2016-08-15 14:52:09,948 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (963108913) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:52:09,948 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56452 because read count=-1. Number of active connections: 11 2016-08-15 14:52:09,948 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56451 because read count=-1. 
Number of active connections: 11 2016-08-15 14:52:09,948 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (-871199011) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:52:09,949 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x65180046 connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:52:09,952 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x651800460x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:52:09,953 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d8f0775, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:52:09,953 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:52:09,953 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:52:09,954 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x65180046-0x156902d8a14002d connected 2016-08-15 14:52:09,956 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:52:09,956 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56457; # active connections: 10 2016-08-15 14:52:09,959 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:09,960 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56457 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:09,960 INFO [main] mapreduce.MapReduceRestoreService(56): Restore incremental backup from directory hdfs://localhost:55740/backupUT/backup_1471297810954/WALs from hbase tables ,ns3:test-14712977502232 to tables ,ns3:table3_restore 2016-08-15 14:52:09,961 INFO [main] mapreduce.MapReduceRestoreService(61): Restore ns3:test-14712977502232 into ns3:table3_restore 2016-08-15 14:52:09,962 DEBUG [main] mapreduce.WALPlayer(299): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471297929961 2016-08-15 14:52:09,963 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xd81745a connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:52:09,964 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0xd81745a0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:52:09,965 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2090f8f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:52:09,965 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:52:09,965 DEBUG [main] ipc.AsyncRpcClient(171): Use global 
event loop group NioEventLoopGroup 2016-08-15 14:52:09,966 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0xd81745a-0x156902d8a14002e connected 2016-08-15 14:52:09,967 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 14:52:09,967 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56459; # active connections: 11 2016-08-15 14:52:09,968 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:09,968 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56459 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:09,970 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled 2016-08-15 14:52:09,970 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns3:test-14712977502232 2016-08-15 14:52:09,973 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:52:09,973 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56460; # active connections: 12 2016-08-15 14:52:09,974 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:09,974 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56460 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:09,977 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count 2016-08-15 14:52:09,977 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_a8ee530f-e919-42e1-b3cd-3e4fea4c8150 2016-08-15 14:52:09,983 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741976_1152{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 153 2016-08-15 14:52:10,391 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 
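At 14:52:09,960 above, MapReduceRestoreService hands the incremental restore to WALPlayer: the job reads the backed-up WALs under hdfs://localhost:55740/backupUT/backup_1471297810954/WALs and, because a bulk-output directory is configured, writes HFiles through HFileOutputFormat2 instead of replaying edits into the live table. A sketch of driving WALPlayer the same way; the output path is a placeholder, and the "wal.bulk.output" key and no-arg WALPlayer constructor are assumed from the 2.0-era code rather than taken from the test:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.mapreduce.WALPlayer;
  import org.apache.hadoop.util.ToolRunner;

  public class ReplayBackupWals {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // With wal.bulk.output set, WALPlayer converts WAL edits into HFiles
      // under this directory (via HFileOutputFormat2) instead of writing to
      // the live table; the directory is bulk-loaded in a second step.
      conf.set("wal.bulk.output", "/tmp/bulk_output-ns3-table3_restore"); // placeholder path
      int rc = ToolRunner.run(conf, new WALPlayer(), new String[] {
          "hdfs://localhost:55740/backupUT/backup_1471297810954/WALs", // WAL input dir (from the log)
          "ns3:test-14712977502232",  // source table recorded in the WALs
          "ns3:table3_restore"        // target table mapping
      });
      System.exit(rc);
    }
  }

Generating HFiles first moves the heavy replay work into MapReduce and leaves only an atomic bulk load against the target cluster.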
2016-08-15 14:52:10,743 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@75965af8] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:55741 to delete [blk_1073741916_1092, blk_1073741917_1093]
2016-08-15 14:52:11,143 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-6960760001491149797.jar
2016-08-15 14:52:11,411 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0002_000001 (auth:SIMPLE)
2016-08-15 14:52:12,795 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-15 14:52:20,150 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-2681847680177857793.jar
2016-08-15 14:52:21,754 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-5311817867218385609.jar
2016-08-15 14:52:21,800 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-538873529629753553.jar
2016-08-15 14:52:28,653 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-5520190152356412015.jar
2016-08-15 14:52:28,653 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-15 14:52:28,654 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-15 14:52:28,654 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-15 14:52:28,654 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-15 14:52:28,654 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-15 14:52:28,655 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-15 14:52:28,859 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-2847888872719469572.jar
2016-08-15 14:52:28,860 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-2847888872719469572.jar
2016-08-15 14:52:30,085 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-3633604484812354139.jar
2016-08-15 14:52:30,086 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-2847888872719469572.jar
2016-08-15 14:52:30,086 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-2847888872719469572.jar
2016-08-15 14:52:30,086 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/hadoop-3633604484812354139.jar
2016-08-15 14:52:30,087 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar
2016-08-15 14:52:30,087 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns3:test-14712977502232 output configured.
2016-08-15 14:52:30,087 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-15 14:52:30,087 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14002e
2016-08-15 14:52:30,088 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:52:30,089 DEBUG [main] mapreduce.WALPlayer(316): success configuring load incremental job
2016-08-15 14:52:30,089 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56460 because read count=-1. Number of active connections: 12
2016-08-15 14:52:30,089 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (984587727) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:52:30,089 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56459 because read count=-1. Number of active connections: 12
2016-08-15 14:52:30,089 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (-1217035378) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:52:30,089 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-15 14:52:30,130 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741977_1153{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:52:30,138 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741978_1154{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:52:30,145 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741979_1155{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 38156
2016-08-15 14:52:30,557 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741980_1156{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 662656
2016-08-15 14:52:30,978 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741981_1157{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:52:30,995 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741982_1158{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:52:31,003 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741983_1159{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 112558
2016-08-15 14:52:31,429 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741984_1160{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 4515909
2016-08-15 14:52:31,859 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741985_1161{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:52:31,872 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741986_1162{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:52:31,884 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741987_1163{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0
2016-08-15 14:52:31,894 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741988_1164{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 792964
2016-08-15 14:52:32,306 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741989_1165{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:52:32,319 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741990_1166{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 1795932
2016-08-15 14:52:32,726 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-15 14:52:32,744 DEBUG [main] mapreduce.WALInputFormat(263): Scanning hdfs://localhost:55740/backupUT/backup_1471297810954/WALs for WAL files
2016-08-15 14:52:32,745 WARN [main] mapreduce.WALInputFormat(286): File hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/.backup.manifest does not appear to be an WAL file. Skipping...
2016-08-15 14:52:32,745 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531 2016-08-15 14:52:32,745 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297731200 2016-08-15 14:52:32,745 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531 2016-08-15 14:52:32,745 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297733730 2016-08-15 14:52:32,745 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595 2016-08-15 14:52:32,745 INFO [main] mapreduce.WALInputFormat(278): Found: hdfs://localhost:55740/backupUT/backup_1471297810954/WALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231 2016-08-15 14:52:32,752 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741991_1167{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 1237 2016-08-15 14:52:33,170 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741992_1168{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 45 2016-08-15 14:52:33,592 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741993_1169{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 134679 2016-08-15 14:52:34,035 WARN [ResourceManager Event Processor] capacity.LeafQueue(632): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start 2016-08-15 14:52:34,036 WARN [ResourceManager Event Processor] capacity.LeafQueue(653): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. 
skipping enforcement to allow at least one application to start 2016-08-15 14:52:34,465 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:39,472 INFO [Socket Reader #1 for port 55828] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:39,720 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741994_1170{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0 2016-08-15 14:52:41,700 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:41,700 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:42,559 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:42,560 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:43,566 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:44,577 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:47,310 DEBUG [10.22.9.171,55757,1471297725443_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-15 14:52:47,323 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:47,347 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0003_01_000003 is : 143 2016-08-15 14:52:47,732 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-15 14:52:47,734 INFO [10.22.9.171,55755,1471297724766_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5f42778b connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:52:47,740 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5f42778b0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:52:47,747 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5f42778b-0x156902d8a14002f connected 2016-08-15 14:52:47,747 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@23e50015, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:52:47,747 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] 
ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:52:47,747 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:52:47,747 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup 2016-08-15 14:52:47,750 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:52:47,750 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56535; # active connections: 11 2016-08-15 14:52:47,751 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:47,752 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56535 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:47,757 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:52:47,757 DEBUG [RpcServer.listener,port=55757] ipc.RpcServer$Listener(880): RpcServer.listener,port=55757: connection from 10.22.9.171:56536; # active connections: 7 2016-08-15 14:52:47,758 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:47,758 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56536 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:47,762 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297729233 2016-08-15 14:52:47,763 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297729233 2016-08-15 14:52:47,763 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531 2016-08-15 14:52:47,765 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297762531 2016-08-15 14:52:47,765 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] 
impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961 2016-08-15 14:52:47,767 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297762961 2016-08-15 14:52:47,767 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297729233 2016-08-15 14:52:47,768 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297729233 2016-08-15 14:52:47,768 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531 2016-08-15 14:52:47,770 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297762531 2016-08-15 14:52:47,770 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595 2016-08-15 14:52:47,771 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297754595 2016-08-15 14:52:47,771 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231 2016-08-15 14:52:47,773 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297757231 2016-08-15 14:52:47,773 INFO [10.22.9.171,55755,1471297724766_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14002f 2016-08-15 14:52:47,774 DEBUG [10.22.9.171,55755,1471297724766_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:52:47,775 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (-588907406) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:52:47,775 DEBUG [AsyncRpcChannel-pool2-t8] 
ipc.AsyncRpcChannel$8(566): IPC Client (1693826862) to /10.22.9.171:55757 from tyu: closed 2016-08-15 14:52:47,775 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56535 because read count=-1. Number of active connections: 11 2016-08-15 14:52:47,775 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56536 because read count=-1. Number of active connections: 7 2016-08-15 14:52:47,950 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:47,973 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0003_01_000005 is : 143 2016-08-15 14:52:47,985 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:48,006 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0003_01_000002 is : 143 2016-08-15 14:52:48,012 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:48,038 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0003_01_000004 is : 143 2016-08-15 14:52:48,818 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:48,830 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0003_01_000006 is : 143 2016-08-15 14:52:49,329 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:49,342 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0003_01_000007 is : 143 2016-08-15 14:52:49,613 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:52,214 INFO [Socket Reader #1 for port 55836] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:52,227 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471297749092_0003_01_000008 is : 143 2016-08-15 14:52:52,250 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741995_1171{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 15955 2016-08-15 14:52:52,257 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741996_1172{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0 2016-08-15 14:52:52,277 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: 
blockMap updated: 127.0.0.1:55741 is added to blk_1073741997_1173{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 0 2016-08-15 14:52:52,293 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741998_1174{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0 2016-08-15 14:52:53,310 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741991_1167 127.0.0.1:55741 2016-08-15 14:52:53,311 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741992_1168 127.0.0.1:55741 2016-08-15 14:52:53,311 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741993_1169 127.0.0.1:55741 2016-08-15 14:52:53,311 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741995_1171 127.0.0.1:55741 2016-08-15 14:52:53,311 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741994_1170 127.0.0.1:55741 2016-08-15 14:52:53,311 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741990_1166 127.0.0.1:55741 2016-08-15 14:52:53,311 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741982_1158 127.0.0.1:55741 2016-08-15 14:52:53,311 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741989_1165 127.0.0.1:55741 2016-08-15 14:52:53,311 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741984_1160 127.0.0.1:55741 2016-08-15 14:52:53,311 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741981_1157 127.0.0.1:55741 2016-08-15 14:52:53,312 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741979_1155 127.0.0.1:55741 2016-08-15 14:52:53,312 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741985_1161 127.0.0.1:55741 2016-08-15 14:52:53,312 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741980_1156 127.0.0.1:55741 2016-08-15 14:52:53,312 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741977_1153 127.0.0.1:55741 2016-08-15 14:52:53,312 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741986_1162 127.0.0.1:55741 2016-08-15 14:52:53,312 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741983_1159 127.0.0.1:55741 2016-08-15 14:52:53,312 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741987_1163 127.0.0.1:55741 2016-08-15 14:52:53,312 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741978_1154 127.0.0.1:55741 2016-08-15 14:52:53,312 INFO [IPC Server handler 9 on 55740] blockmanagement.BlockManager(1115): BLOCK* 
addToInvalidates: blk_1073741988_1164 127.0.0.1:55741 2016-08-15 14:52:53,513 DEBUG [10.22.9.171,55789,1471297733379_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-15 14:52:53,825 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/info 2016-08-15 14:52:53,825 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/backup/3ca5bb17c6b62ed61d22875df8c133ea/meta 2016-08-15 14:52:53,825 DEBUG [region-location-4] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/namespace/606bab3f14856574a09bb943381ad7b3/info 2016-08-15 14:52:53,826 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/table 2016-08-15 14:52:53,826 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/backup/3ca5bb17c6b62ed61d22875df8c133ea/session 2016-08-15 14:52:54,243 DEBUG [main] mapreduce.MapReduceRestoreService(78): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471297929961 2016-08-15 14:52:54,243 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x11e5b5de connecting to ZooKeeper ensemble=localhost:53145 2016-08-15 14:52:54,248 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x11e5b5de0x0, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-15 14:52:54,248 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3de1a9c5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-15 14:52:54,249 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-15 14:52:54,249 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-15 14:52:54,249 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x11e5b5de-0x156902d8a140030 connected 2016-08-15 14:52:54,251 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-15 14:52:54,251 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56562; # active connections: 11 2016-08-15 14:52:54,252 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:54,252 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56562 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: 
"b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:54,258 DEBUG [main] client.ConnectionImplementation(604): Table ns3:table3_restore should be available 2016-08-15 14:52:54,259 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:55740/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471297929961/_SUCCESS 2016-08-15 14:52:54,261 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-15 14:52:54,261 DEBUG [RpcServer.listener,port=55755] ipc.RpcServer$Listener(880): RpcServer.listener,port=55755: connection from 10.22.9.171:56563; # active connections: 12 2016-08-15 14:52:54,262 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-15 14:52:54,262 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 56563 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "8c3d4be22608752ba532c1777e2f636a73b475d4" user: "tyu" date: "Mon Aug 15 14:48:26 PDT 2016" src_checksum: "b7603a421924c3ecf5655a4c3c6866d3" version_major: 2 version_minor: 0 2016-08-15 14:52:54,263 WARN [main] mapreduce.LoadIncrementalHFiles(350): Bulk load operation did not find any files to load in directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471297929961. Does it contain files in subdirectories that correspond to column family names? 2016-08-15 14:52:54,263 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-15 14:52:54,263 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140030 2016-08-15 14:52:54,264 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:52:54,265 DEBUG [main] mapreduce.MapReduceRestoreService(90): Restore Job finished:0 2016-08-15 14:52:54,265 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14002d 2016-08-15 14:52:54,265 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56563 because read count=-1. Number of active connections: 12 2016-08-15 14:52:54,265 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (-986838935) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:52:54,265 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56562 because read count=-1. 
Number of active connections: 12 2016-08-15 14:52:54,265 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (968487174) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:52:54,265 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:52:54,266 INFO [main] impl.RestoreClientImpl(292): ns3:test-14712977502232 has been successfully restored to ns3:table3_restore 2016-08-15 14:52:54,266 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-15 14:52:54,266 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471297762157 hdfs://localhost:55740/backupUT/backup_1471297762157/ns3/test-14712977502232/ 2016-08-15 14:52:54,266 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471297810954 hdfs://localhost:55740/backupUT/backup_1471297810954/ns3/test-14712977502232/ 2016-08-15 14:52:54,266 DEBUG [main] impl.RestoreClientImpl(234): restoreStage finished 2016-08-15 14:52:54,266 INFO [main] impl.RestoreClientImpl(108): Restore for [ns1:test-1471297750223, ns2:test-14712977502231, ns3:test-14712977502232] are successful! 2016-08-15 14:52:54,266 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (1509859885) to /10.22.9.171:55755 from tyu: closed 2016-08-15 14:52:54,266 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56457 because read count=-1. Number of active connections: 10 2016-08-15 14:52:54,352 INFO [main] hbase.ResourceChecker(172): after: backup.TestIncrementalBackup#TestIncBackupRestore Thread=885 (was 794) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1559600332_1 at /127.0.0.1:56102 [Receiving block BP-1170158313-10.22.9.171-1471297719769:blk_1073741887_1063] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read1(BufferedInputStream.java:275) java.io.BufferedInputStream.read(BufferedInputStream.java:334) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DeletionService #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ApplicationMasterLauncher #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: PacketResponder: BP-1170158313-10.22.9.171-1471297719769:blk_1073741888_1064, type=LAST_IN_PIPELINE, downstreams=0:[] java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:503) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x2192bac5-shared-pool33-t217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataStreamer for file 
/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297811122 block BP-1170158313-10.22.9.171-1471297719769:blk_1073741885_1061 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_250885994_1 at /127.0.0.1:56103 [Receiving block BP-1170158313-10.22.9.171-1471297719769:blk_1073741888_1064] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read1(BufferedInputStream.java:275) java.io.BufferedInputStream.read(BufferedInputStream.java:334) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DeletionService #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55757-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: LogDeleter #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1085) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1215994767_1 at /127.0.0.1:56557 [Waiting for operation #3] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x2192bac5-shared-pool33-t218 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: 
B.defaultRpcServer.handler=2,queue=0,port=55755-SendThread(localhost:53145) sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) Potentially hanging thread: ContainersLauncher #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DeletionService #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: MoveIntermediateToDone Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55757-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DeletionService #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ApplicationMasterLauncher #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55757-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: IPC Client (1690770996) connection to /10.22.9.171:56500 from tyu java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:933) org.apache.hadoop.ipc.Client$Connection.run(Client.java:978) Potentially hanging thread: PacketResponder: BP-1170158313-10.22.9.171-1471297719769:blk_1073741886_1062, type=LAST_IN_PIPELINE, downstreams=0:[] java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:503) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1559600332_1 at /127.0.0.1:56104 [Receiving block BP-1170158313-10.22.9.171-1471297719769:blk_1073741889_1065] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) 
sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read1(BufferedInputStream.java:275) java.io.BufferedInputStream.read(BufferedInputStream.java:334) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: Async disk worker #0 for volume /Users/tyu/upstream-backup/hbase-server/target/test-data/154dbba6-092f-4c49-ac4f-8c98ca437cdc/dfscluster_2ab7a416-99ef-4ee2-a636-a71e620e675a/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: rs(10.22.9.171,55755,1471297724766)-backup-pool29-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55757-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) 
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: PacketResponder: BP-1170158313-10.22.9.171-1471297719769:blk_1073741885_1061, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: (10.22.9.171,55755,1471297724766)-proc-coordinator-pool3-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ApplicationMasterLauncher #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: rs(10.22.9.171,55757,1471297725443)-backup-pool20-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55793-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: member: '10.22.9.171,55755,1471297724766' subprocedure-pool4-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: rs(10.22.9.171,55757,1471297725443)-backup-pool30-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t15
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_CLOSE_REGION-10.22.9.171:55757-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: region-location-4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: PacketResponder: BP-1170158313-10.22.9.171-1471297719769:blk_1073741889_1065, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55757-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: LogDeleter #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1085)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55757-7
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t12
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: Thread-4308
    java.io.FileInputStream.readBytes(Native Method)
    java.io.FileInputStream.read(FileInputStream.java:272)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
    sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
    sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
    java.io.InputStreamReader.read(InputStreamReader.java:184)
    java.io.BufferedReader.fill(BufferedReader.java:154)
    java.io.BufferedReader.readLine(BufferedReader.java:317)
    java.io.BufferedReader.readLine(BufferedReader.java:382)
    org.apache.hadoop.util.Shell$1.run(Shell.java:547)
Potentially hanging thread: ResponseProcessor for block BP-1170158313-10.22.9.171-1471297719769:blk_1073741890_1066
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55789-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55789-5
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: B.defaultRpcServer.handler=3,queue=0,port=55755-SendThread(localhost:53145)
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Potentially hanging thread: (10.22.9.171,55755,1471297724766)-proc-coordinator-pool8-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ContainersLauncher #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DeletionService #3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55755-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55793-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ApplicationMasterLauncher #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ApplicationMasterLauncher #4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55755-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: rs(10.22.9.171,55755,1471297724766)-backup-pool19-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: member: '10.22.9.171,55757,1471297725443' subprocedure-pool1-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DeletionService #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_CLOSE_REGION-10.22.9.171:55757-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DeletionService #3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataStreamer for file /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542 block BP-1170158313-10.22.9.171-1471297719769:blk_1073741887_1063
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1559600332_1 at /127.0.0.1:56105 [Receiving block BP-1170158313-10.22.9.171-1471297719769:blk_1073741890_1066]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: member: '10.22.9.171,55757,1471297725443' subprocedure-pool5-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t16
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55789-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1468275884_1 at /127.0.0.1:56564 [Waiting for operation #2]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: region-location-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ResponseProcessor for block BP-1170158313-10.22.9.171-1471297719769:blk_1073741887_1063
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55755-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55757-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: MASTER_TABLE_OPERATIONS-10.22.9.171:55755-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t11
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55757-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: member: '10.22.9.171,55755,1471297724766' subprocedure-pool2-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataStreamer for file /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297811543 block BP-1170158313-10.22.9.171-1471297719769:blk_1073741888_1064
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417)
Potentially hanging thread: ContainersLauncher #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55789-4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: B.defaultRpcServer.handler=3,queue=0,port=55755-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Potentially hanging thread: B.defaultRpcServer.handler=2,queue=0,port=55755-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Potentially hanging thread: DataStreamer for file /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387 block BP-1170158313-10.22.9.171-1471297719769:blk_1073741890_1066
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55755-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1559600332_1 at /127.0.0.1:56100 [Receiving block BP-1170158313-10.22.9.171-1471297719769:blk_1073741885_1061]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ContainersLauncher #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: PacketResponder: BP-1170158313-10.22.9.171-1471297719769:blk_1073741890_1066, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55789-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ContainersLauncher #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ResponseProcessor for block BP-1170158313-10.22.9.171-1471297719769:blk_1073741886_1062
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733)
Potentially hanging thread: ContainersLauncher #3
    java.io.FileInputStream.readBytes(Native Method)
    java.io.FileInputStream.read(FileInputStream.java:272)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
    sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
    sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
    java.io.InputStreamReader.read(InputStreamReader.java:184)
    java.io.BufferedReader.fill(BufferedReader.java:154)
    java.io.BufferedReader.read1(BufferedReader.java:205)
    java.io.BufferedReader.read(BufferedReader.java:279)
    org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:786)
    org.apache.hadoop.util.Shell.runCommand(Shell.java:568)
    org.apache.hadoop.util.Shell.run(Shell.java:479)
    org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    java.util.concurrent.FutureTask.run(FutureTask.java:262)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataStreamer for file /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297811122 block BP-1170158313-10.22.9.171-1471297719769:blk_1073741886_1062
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417)
Potentially hanging thread: PacketResponder: BP-1170158313-10.22.9.171-1471297719769:blk_1073741887_1063, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1232)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1303)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t14
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55755-5
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ResponseProcessor for block BP-1170158313-10.22.9.171-1471297719769:blk_1073741885_1061
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733)
Potentially hanging thread: DeletionService #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: region-location-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_CLOSE_REGION-10.22.9.171:55757-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: AsyncRpcChannel-pool2-t10
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: Async disk worker #0 for volume /Users/tyu/upstream-backup/hbase-server/target/test-data/154dbba6-092f-4c49-ac4f-8c98ca437cdc/dfscluster_2ab7a416-99ef-4ee2-a636-a71e620e675a/dfs/data/data1/current
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55757-6
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55757-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ResponseProcessor for block BP-1170158313-10.22.9.171-1471297719769:blk_1073741888_1064
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733)
Potentially hanging thread: AsyncRpcChannel-pool2-t13
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ResponseProcessor for block BP-1170158313-10.22.9.171-1471297719769:blk_1073741889_1065
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:733)
Potentially hanging thread: LogDeleter #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1468275884_1 at /127.0.0.1:56565 [Waiting for operation #2]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: MoveIntermediateToDone Thread #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: LogDeleter #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55789-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Potentially hanging thread: ContainersLauncher #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:55755-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataStreamer for file /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964 block BP-1170158313-10.22.9.171-1471297719769:blk_1073741889_1065 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:417) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_250885994_1 at /127.0.0.1:56101 [Receiving block BP-1170158313-10.22.9.171-1471297719769:blk_1073741886_1062] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read1(BufferedInputStream.java:275) java.io.BufferedInputStream.read(BufferedInputStream.java:334) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253) java.lang.Thread.run(Thread.java:745) - Thread LEAK? 
-, OpenFileDescriptor=1163 (was 1032) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=10240 (was 10240), SystemLoadAverage=478 (was 202) - SystemLoadAverage LEAK? -, ProcessCount=268 (was 266) - ProcessCount LEAK? -, AvailableMemoryMB=1499 (was 451) - AvailableMemoryMB LEAK? - 2016-08-15 14:52:54,353 WARN [main] hbase.ResourceChecker(135): Thread=885 is superior to 500 2016-08-15 14:52:54,353 WARN [main] hbase.ResourceChecker(135): OpenFileDescriptor=1163 is superior to 1024 2016-08-15 14:52:54,403 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741912_1088 127.0.0.1:55741 2016-08-15 14:52:54,404 INFO [IPC Server handler 8 on 55740] blockmanagement.BlockManager(1115): BLOCK* addToInvalidates: blk_1073741915_1091 127.0.0.1:55741 2016-08-15 14:52:54,404 INFO [main] hbase.HBaseTestingUtility(1142): Shutting down minicluster 2016-08-15 14:52:54,404 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a14000b 2016-08-15 14:52:54,405 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:52:54,405 DEBUG [main] util.JVMClusterUtil(241): Shutting down HBase Cluster 2016-08-15 14:52:54,405 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (1217474658) to /10.22.9.171:55789 from tyu: closed 2016-08-15 14:52:54,405 DEBUG [main] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.backup.master.BackupController 2016-08-15 14:52:54,405 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55789] ipc.RpcServer$Listener(912): RpcServer.listener,port=55789: DISCONNECTING client 10.22.9.171:55813 because read count=-1. Number of active connections: 2 2016-08-15 14:52:54,405 INFO [main] regionserver.HRegionServer(1918): STOPPED: Cluster shutdown requested 2016-08-15 14:52:54,406 INFO [M:0;10.22.9.171:55789] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-08-15 14:52:54,406 INFO [SplitLogWorker-10.22.9.171:55789] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-08-15 14:52:54,406 INFO [M:0;10.22.9.171:55789] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore. 
2016-08-15 14:52:54,406 INFO [SplitLogWorker-10.22.9.171:55789] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,55789,1471297733379 exiting
2016-08-15 14:52:54,407 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting
2016-08-15 14:52:54,407 INFO [M:0;10.22.9.171:55789] procedure2.ProcedureExecutor(532): Stopping the procedure executor
2016-08-15 14:52:54,407 INFO [main] regionserver.HRegionServer(1918): STOPPED: Shutdown requested
2016-08-15 14:52:54,407 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting
2016-08-15 14:52:54,407 INFO [RS:0;10.22.9.171:55793] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-08-15 14:52:54,407 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55789-0x156902d8a140006, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running
2016-08-15 14:52:54,407 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55793-0x156902d8a140007, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running
2016-08-15 14:52:54,407 INFO [M:0;10.22.9.171:55789] wal.WALProcedureStore(232): Stopping the WAL Procedure Store
2016-08-15 14:52:54,407 INFO [RS:0;10.22.9.171:55793] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore.
2016-08-15 14:52:54,407 INFO [SplitLogWorker-10.22.9.171:55793] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-08-15 14:52:54,408 INFO [SplitLogWorker-10.22.9.171:55793] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,55793,1471297733428 exiting
2016-08-15 14:52:54,408 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:55789-0x156902d8a140006, quorum=localhost:53145, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-08-15 14:52:54,408 INFO [RS:0;10.22.9.171:55793] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-15 14:52:54,408 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting
2016-08-15 14:52:54,409 INFO [RS:0;10.22.9.171:55793] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-15 14:52:54,408 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting
2016-08-15 14:52:54,408 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:55793-0x156902d8a140007, quorum=localhost:53145, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-08-15 14:52:54,409 INFO [RS:0;10.22.9.171:55793] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-15 14:52:54,409 INFO [RS:0;10.22.9.171:55793] regionserver.HRegionServer(1063): stopping server 10.22.9.171,55793,1471297733428
2016-08-15 14:52:54,409 DEBUG [RS:0;10.22.9.171:55793] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-15 14:52:54,409 INFO [RS:0;10.22.9.171:55793] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140009
2016-08-15 14:52:54,409 DEBUG [RS_CLOSE_REGION-10.22.9.171:55793-0] handler.CloseRegionHandler(90): Processing close of hbase:backup,,1471297736292.3ca5bb17c6b62ed61d22875df8c133ea.
2016-08-15 14:52:54,410 DEBUG [RS_CLOSE_REGION-10.22.9.171:55793-0] regionserver.HRegion(1419): Closing hbase:backup,,1471297736292.3ca5bb17c6b62ed61d22875df8c133ea.: disabling compactions & flushes
2016-08-15 14:52:54,410 DEBUG [RS_CLOSE_REGION-10.22.9.171:55793-0] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1471297736292.3ca5bb17c6b62ed61d22875df8c133ea.
2016-08-15 14:52:54,410 DEBUG [RS:0;10.22.9.171:55793] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:52:54,410 INFO [StoreCloserThread-hbase:backup,,1471297736292.3ca5bb17c6b62ed61d22875df8c133ea.-1] regionserver.HStore(839): Closed meta
2016-08-15 14:52:54,410 INFO [RS:0;10.22.9.171:55793] regionserver.HRegionServer(1292): Waiting on 1 regions to close
2016-08-15 14:52:54,410 DEBUG [RS:0;10.22.9.171:55793] regionserver.HRegionServer(1296): {3ca5bb17c6b62ed61d22875df8c133ea=hbase:backup,,1471297736292.3ca5bb17c6b62ed61d22875df8c133ea.}
2016-08-15 14:52:54,410 INFO [StoreCloserThread-hbase:backup,,1471297736292.3ca5bb17c6b62ed61d22875df8c133ea.-1] regionserver.HStore(839): Closed session
2016-08-15 14:52:54,411 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55793,1471297733428/10.22.9.171%2C55793%2C1471297733428.regiongroup-1.1471297737100
2016-08-15 14:52:54,417 DEBUG [RS_CLOSE_REGION-10.22.9.171:55793-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/backup/3ca5bb17c6b62ed61d22875df8c133ea/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-15 14:52:54,418 INFO [RS_CLOSE_REGION-10.22.9.171:55793-0] regionserver.HRegion(1552): Closed hbase:backup,,1471297736292.3ca5bb17c6b62ed61d22875df8c133ea.
2016-08-15 14:52:54,418 DEBUG [RS_CLOSE_REGION-10.22.9.171:55793-0] handler.CloseRegionHandler(122): Closed hbase:backup,,1471297736292.3ca5bb17c6b62ed61d22875df8c133ea.
2016-08-15 14:52:54,463 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55782 is added to blk_1073741830_1006{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-08860d4b-375c-4c6c-818b-8e98039ea714:NORMAL:127.0.0.1:55782|RBW]]} size 9583
2016-08-15 14:52:54,491 INFO [10.22.9.171,55789,1471297733379_splitLogManager__ChoreService_1] hbase.ScheduledChore(179): Chore: SplitLogManager Timeout Monitor was stopped
2016-08-15 14:52:54,541 INFO [10.22.9.171,55789,1471297733379_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.9.171,55789,1471297733379-MemstoreFlusherChore was stopped
2016-08-15 14:52:54,615 INFO [RS:0;10.22.9.171:55793] regionserver.HRegionServer(1091): stopping server 10.22.9.171,55793,1471297733428; all regions closed.
2016-08-15 14:52:54,616 DEBUG [RS:0;10.22.9.171:55793] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55793,1471297733428
2016-08-15 14:52:54,616 DEBUG [RS:0;10.22.9.171:55793] wal.FSHLog(1090): closing hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55793,1471297733428/10.22.9.171%2C55793%2C1471297733428.regiongroup-1.1471297737100
2016-08-15 14:52:54,625 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55782 is added to blk_1073741838_1014{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-08860d4b-375c-4c6c-818b-8e98039ea714:NORMAL:127.0.0.1:55782|RBW]]} size 669
2016-08-15 14:52:54,642 INFO [10.22.9.171,55793,1471297733428_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.9.171,55793,1471297733428-MemstoreFlusherChore was stopped
2016-08-15 14:52:54,643 INFO [10.22.9.171,55793,1471297733428_ChoreService_1] hbase.ScheduledChore(179): Chore: MovedRegionsCleaner for region 10.22.9.171,55793,1471297733428 was stopped
2016-08-15 14:52:54,643 INFO [10.22.9.171,55793,1471297733428_ChoreService_1] hbase.ScheduledChore(179): Chore: CompactedHFilesCleaner was stopped
2016-08-15 14:52:54,643 DEBUG [10.22.9.171,55793,1471297733428_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-15 14:52:54,866 INFO [M:0;10.22.9.171:55789] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-15 14:52:54,866 INFO [M:0;10.22.9.171:55789] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-15 14:52:54,866 INFO [M:0;10.22.9.171:55789] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-15 14:52:54,867 INFO [M:0;10.22.9.171:55789] regionserver.HRegionServer(1063): stopping server 10.22.9.171,55789,1471297733379
2016-08-15 14:52:54,867 DEBUG [M:0;10.22.9.171:55789] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-15 14:52:54,867 DEBUG [RS_CLOSE_REGION-10.22.9.171:55789-0] handler.CloseRegionHandler(90): Processing close of hbase:namespace,,1471297733821.606bab3f14856574a09bb943381ad7b3.
2016-08-15 14:52:54,867 INFO [M:0;10.22.9.171:55789] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140008
2016-08-15 14:52:54,868 DEBUG [RS_CLOSE_REGION-10.22.9.171:55789-0] regionserver.HRegion(1419): Closing hbase:namespace,,1471297733821.606bab3f14856574a09bb943381ad7b3.: disabling compactions & flushes
2016-08-15 14:52:54,868 DEBUG [RS_CLOSE_REGION-10.22.9.171:55789-0] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1471297733821.606bab3f14856574a09bb943381ad7b3.
2016-08-15 14:52:54,868 INFO [RS_CLOSE_REGION-10.22.9.171:55789-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=344 B
2016-08-15 14:52:54,868 DEBUG [M:0;10.22.9.171:55789] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:52:54,869 INFO [M:0;10.22.9.171:55789] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-15 14:52:54,869 INFO [M:0;10.22.9.171:55789] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-15 14:52:54,869 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55793] ipc.RpcServer$Listener(912): RpcServer.listener,port=55793: DISCONNECTING client 10.22.9.171:55818 because read count=-1. Number of active connections: 1
2016-08-15 14:52:54,869 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (-381541266) to /10.22.9.171:55793 from tyu: closed
2016-08-15 14:52:54,869 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379/10.22.9.171%2C55789%2C1471297733379.regiongroup-1.1471297734728
2016-08-15 14:52:54,869 INFO [M:0;10.22.9.171:55789] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-15 14:52:54,870 INFO [M:0;10.22.9.171:55789] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-15 14:52:54,870 INFO [M:0;10.22.9.171:55789] regionserver.HRegionServer(1292): Waiting on 2 regions to close
2016-08-15 14:52:54,870 DEBUG [M:0;10.22.9.171:55789] regionserver.HRegionServer(1296): {606bab3f14856574a09bb943381ad7b3=hbase:namespace,,1471297733821.606bab3f14856574a09bb943381ad7b3., 1588230740=hbase:meta,,1.1588230740}
2016-08-15 14:52:54,870 DEBUG [RS_CLOSE_META-10.22.9.171:55789-0] handler.CloseRegionHandler(90): Processing close of hbase:meta,,1.1588230740
2016-08-15 14:52:54,870 DEBUG [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-08-15 14:52:54,871 DEBUG [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740
2016-08-15 14:52:54,871 INFO [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=4.02 KB
2016-08-15 14:52:54,871 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379.meta/10.22.9.171%2C55789%2C1471297733379.meta.regiongroup-0.1471297733597
2016-08-15 14:52:54,879 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55782 is added to blk_1073741839_1015{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-08860d4b-375c-4c6c-818b-8e98039ea714:NORMAL:127.0.0.1:55782|RBW]]} size 4912
2016-08-15 14:52:54,879 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55782 is added to blk_1073741840_1016{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-08860d4b-375c-4c6c-818b-8e98039ea714:NORMAL:127.0.0.1:55782|RBW]]} size 6350
2016-08-15 14:52:55,029 DEBUG [RS:0;10.22.9.171:55793] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/oldWALs
2016-08-15 14:52:55,030 INFO [RS:0;10.22.9.171:55793] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55793%2C1471297733428.regiongroup-1:(num 1471297737100)
2016-08-15 14:52:55,030 DEBUG [RS:0;10.22.9.171:55793] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55793,1471297733428
2016-08-15 14:52:55,030 DEBUG [RS:0;10.22.9.171:55793] wal.FSHLog(1090): closing hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55793,1471297733428/10.22.9.171%2C55793%2C1471297733428.regiongroup-0.1471297735614
2016-08-15 14:52:55,035 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55782 is added to blk_1073741835_1011{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7cb0cff0-a33a-46b3-ae55-b4ade9920aee:NORMAL:127.0.0.1:55782|RBW]]} size 91
2016-08-15 14:52:55,241 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(146): master//10.22.9.171:0.leaseChecker closing leases
2016-08-15 14:52:55,241 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(149): master//10.22.9.171:0.leaseChecker closed leases
2016-08-15 14:52:55,272 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(146): regionserver//10.22.9.171:0.leaseChecker closing leases
2016-08-15 14:52:55,272 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(149): regionserver//10.22.9.171:0.leaseChecker closed leases
2016-08-15 14:52:55,272 INFO [master//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-15 14:52:55,273 INFO [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-15 14:52:55,273 INFO [RS_OPEN_META-10.22.9.171:55789-0-MetaLogRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-15 14:52:55,285 INFO [RS_CLOSE_REGION-10.22.9.171:55789-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=6, memsize=344, hasBloomFilter=true, into tmp file hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/namespace/606bab3f14856574a09bb943381ad7b3/.tmp/fa9754f78f2a49858bf4f9d204f32151
2016-08-15 14:52:55,285 INFO [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=15, memsize=3.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/.tmp/6cb228c9c673418ca9195723247a1207
2016-08-15 14:52:55,295 DEBUG [RS_CLOSE_REGION-10.22.9.171:55789-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/namespace/606bab3f14856574a09bb943381ad7b3/.tmp/fa9754f78f2a49858bf4f9d204f32151 as hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/namespace/606bab3f14856574a09bb943381ad7b3/info/fa9754f78f2a49858bf4f9d204f32151
2016-08-15 14:52:55,301 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55782 is added to blk_1073741841_1017{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-08860d4b-375c-4c6c-818b-8e98039ea714:NORMAL:127.0.0.1:55782|RBW]]} size 4846
2016-08-15 14:52:55,301 INFO [RS_CLOSE_REGION-10.22.9.171:55789-0] regionserver.HStore(934): Added hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/namespace/606bab3f14856574a09bb943381ad7b3/info/fa9754f78f2a49858bf4f9d204f32151, entries=2, sequenceid=6, filesize=4.8 K
2016-08-15 14:52:55,301 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379/10.22.9.171%2C55789%2C1471297733379.regiongroup-1.1471297734728
2016-08-15 14:52:55,302 INFO [RS_CLOSE_REGION-10.22.9.171:55789-0] regionserver.HRegion(2545): Finished memstore flush of ~344 B/344, currentsize=0 B/0 for region hbase:namespace,,1471297733821.606bab3f14856574a09bb943381ad7b3. in 434ms, sequenceid=6, compaction requested=false
2016-08-15 14:52:55,303 INFO [StoreCloserThread-hbase:namespace,,1471297733821.606bab3f14856574a09bb943381ad7b3.-1] regionserver.HStore(839): Closed info
2016-08-15 14:52:55,303 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379/10.22.9.171%2C55789%2C1471297733379.regiongroup-1.1471297734728
2016-08-15 14:52:55,308 DEBUG [RS_CLOSE_REGION-10.22.9.171:55789-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/namespace/606bab3f14856574a09bb943381ad7b3/recovered.edits/9.seqid to file, newSeqId=9, maxSeqId=2
2016-08-15 14:52:55,309 INFO [RS_CLOSE_REGION-10.22.9.171:55789-0] regionserver.HRegion(1552): Closed hbase:namespace,,1471297733821.606bab3f14856574a09bb943381ad7b3.
2016-08-15 14:52:55,309 DEBUG [RS_CLOSE_REGION-10.22.9.171:55789-0] handler.CloseRegionHandler(122): Closed hbase:namespace,,1471297733821.606bab3f14856574a09bb943381ad7b3.
2016-08-15 14:52:55,442 DEBUG [RS:0;10.22.9.171:55793] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/oldWALs
2016-08-15 14:52:55,442 INFO [RS:0;10.22.9.171:55793] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55793%2C1471297733428.regiongroup-0:(num 1471297735614)
2016-08-15 14:52:55,442 DEBUG [RS:0;10.22.9.171:55793] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:52:55,442 INFO [RS:0;10.22.9.171:55793] regionserver.Leases(146): RS:0;10.22.9.171:55793 closing leases
2016-08-15 14:52:55,442 INFO [RS:0;10.22.9.171:55793] regionserver.Leases(149): RS:0;10.22.9.171:55793 closed leases
2016-08-15 14:52:55,442 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (1297553401) to /10.22.9.171:55789 from tyu.hfs.1: closed
2016-08-15 14:52:55,442 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55789] ipc.RpcServer$Listener(912): RpcServer.listener,port=55789: DISCONNECTING client 10.22.9.171:55800 because read count=-1. Number of active connections: 1
2016-08-15 14:52:55,443 INFO [RS:0;10.22.9.171:55793] hbase.ChoreService(323): Chore service for: 10.22.9.171,55793,1471297733428 had [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS]] on shutdown
2016-08-15 14:52:55,443 INFO [RS:0;10.22.9.171:55793] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-15 14:52:55,443 INFO [RS:0;10.22.9.171:55793] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-15 14:52:55,443 INFO [RS:0;10.22.9.171:55793] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-15 14:52:55,443 INFO [RS:0;10.22.9.171:55793] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-15 14:52:55,446 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55793-0x156902d8a140007, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/replication/rs/10.22.9.171,55793,1471297733428
2016-08-15 14:52:55,446 INFO [RS:0;10.22.9.171:55793] ipc.RpcServer(2336): Stopping server on 55793
2016-08-15 14:52:55,447 INFO [RpcServer.listener,port=55793] ipc.RpcServer$Listener(816): RpcServer.listener,port=55793: stopping
2016-08-15 14:52:55,447 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-15 14:52:55,447 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-15 14:52:55,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55793-0x156902d8a140007, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.9.171,55793,1471297733428
2016-08-15 14:52:55,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55789-0x156902d8a140006, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.9.171,55793,1471297733428
2016-08-15 14:52:55,448 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55793-0x156902d8a140007, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-08-15 14:52:55,448 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,55793,1471297733428]
2016-08-15 14:52:55,450 INFO [main-EventThread] master.ServerManager(609): Cluster shutdown set; 10.22.9.171,55793,1471297733428 expired; onlineServers=1
2016-08-15 14:52:55,450 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55789-0x156902d8a140006, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-08-15 14:52:55,450 INFO [RS:0;10.22.9.171:55793] regionserver.HRegionServer(1135): stopping server 10.22.9.171,55793,1471297733428; zookeeper connection closed.
2016-08-15 14:52:55,450 INFO [RS:0;10.22.9.171:55793] regionserver.HRegionServer(1138): RS:0;10.22.9.171:55793 exiting
2016-08-15 14:52:55,450 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@533e2b14] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(190): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@533e2b14
2016-08-15 14:52:55,450 INFO [main] util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 regionserver(s) complete
2016-08-15 14:52:55,710 INFO [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=15, memsize=704, hasBloomFilter=false, into tmp file hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/.tmp/a10f69506c4543638ed485c673bf3629
2016-08-15 14:52:55,719 DEBUG [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/.tmp/6cb228c9c673418ca9195723247a1207 as hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/info/6cb228c9c673418ca9195723247a1207
2016-08-15 14:52:55,726 INFO [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.HStore(934): Added hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/info/6cb228c9c673418ca9195723247a1207, entries=14, sequenceid=15, filesize=6.2 K
2016-08-15 14:52:55,727 DEBUG [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/.tmp/a10f69506c4543638ed485c673bf3629 as hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/table/a10f69506c4543638ed485c673bf3629
2016-08-15 14:52:55,734 INFO [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.HStore(934): Added hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/table/a10f69506c4543638ed485c673bf3629, entries=4, sequenceid=15, filesize=4.7 K
2016-08-15 14:52:55,734 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379.meta/10.22.9.171%2C55789%2C1471297733379.meta.regiongroup-0.1471297733597
2016-08-15 14:52:55,735 INFO [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.HRegion(2545): Finished memstore flush of ~4.02 KB/4112, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 864ms, sequenceid=15, compaction requested=false
2016-08-15 14:52:55,736 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info
2016-08-15 14:52:55,737 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table
2016-08-15 14:52:55,737 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379.meta/10.22.9.171%2C55789%2C1471297733379.meta.regiongroup-0.1471297733597
2016-08-15 14:52:55,741 DEBUG [RS_CLOSE_META-10.22.9.171:55789-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/data/hbase/meta/1588230740/recovered.edits/18.seqid to file, newSeqId=18, maxSeqId=3
2016-08-15 14:52:55,742 DEBUG [RS_CLOSE_META-10.22.9.171:55789-0] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2016-08-15 14:52:55,743 INFO [RS_CLOSE_META-10.22.9.171:55789-0] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740
2016-08-15 14:52:55,743 DEBUG [RS_CLOSE_META-10.22.9.171:55789-0] handler.CloseRegionHandler(122): Closed hbase:meta,,1.1588230740
2016-08-15 14:52:55,775 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@75965af8] blockmanagement.BlockManager(3455): BLOCK* BlockManager: ask 127.0.0.1:55741 to delete [blk_1073741984_1160, blk_1073741985_1161, blk_1073741986_1162, blk_1073741987_1163, blk_1073741988_1164, blk_1073741989_1165, blk_1073741990_1166, blk_1073741991_1167, blk_1073741992_1168, blk_1073741993_1169, blk_1073741994_1170, blk_1073741995_1171, blk_1073741912_1088, blk_1073741977_1153, blk_1073741978_1154, blk_1073741915_1091, blk_1073741979_1155, blk_1073741980_1156, blk_1073741981_1157, blk_1073741982_1158, blk_1073741983_1159]
2016-08-15 14:52:55,885 INFO [M:0;10.22.9.171:55789] regionserver.HRegionServer(1091): stopping server 10.22.9.171,55789,1471297733379; all regions closed.
2016-08-15 14:52:55,885 DEBUG [M:0;10.22.9.171:55789] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379.meta
2016-08-15 14:52:55,885 DEBUG [M:0;10.22.9.171:55789] wal.FSHLog(1090): closing hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379.meta/10.22.9.171%2C55789%2C1471297733379.meta.regiongroup-0.1471297733597
2016-08-15 14:52:55,891 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55782 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7cb0cff0-a33a-46b3-ae55-b4ade9920aee:NORMAL:127.0.0.1:55782|RBW]]} size 83
2016-08-15 14:52:55,896 DEBUG [M:0;10.22.9.171:55789] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/oldWALs
2016-08-15 14:52:55,896 INFO [M:0;10.22.9.171:55789] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55789%2C1471297733379.meta.regiongroup-0:(num 1471297733597)
2016-08-15 14:52:55,896 DEBUG [M:0;10.22.9.171:55789] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379
2016-08-15 14:52:55,896 DEBUG [M:0;10.22.9.171:55789] wal.FSHLog(1090): closing hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379/10.22.9.171%2C55789%2C1471297733379.regiongroup-1.1471297734728
2016-08-15 14:52:55,900 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55782 is added to blk_1073741834_1010{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-08860d4b-375c-4c6c-818b-8e98039ea714:NORMAL:127.0.0.1:55782|RBW]]} size 1383
2016-08-15 14:52:56,307 DEBUG [M:0;10.22.9.171:55789] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/oldWALs
2016-08-15 14:52:56,307 INFO [M:0;10.22.9.171:55789] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55789%2C1471297733379.regiongroup-1:(num 1471297734728)
2016-08-15 14:52:56,308 DEBUG [M:0;10.22.9.171:55789] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379
2016-08-15 14:52:56,308 DEBUG [M:0;10.22.9.171:55789] wal.FSHLog(1090): closing hdfs://localhost:55781/user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/WALs/10.22.9.171,55789,1471297733379/10.22.9.171%2C55789%2C1471297733379.regiongroup-0.1471297734593
2016-08-15 14:52:56,314 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55782 is added to blk_1073741833_1009{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7cb0cff0-a33a-46b3-ae55-b4ade9920aee:NORMAL:127.0.0.1:55782|RBW]]} size 91
2016-08-15 14:52:56,720 DEBUG [M:0;10.22.9.171:55789] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/a90aa79b-59b9-4196-9974-381cf7cac851/oldWALs
2016-08-15 14:52:56,720 INFO [M:0;10.22.9.171:55789] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55789%2C1471297733379.regiongroup-0:(num 1471297734593)
2016-08-15 14:52:56,720 DEBUG [M:0;10.22.9.171:55789] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:52:56,720 INFO [M:0;10.22.9.171:55789] regionserver.Leases(146): M:0;10.22.9.171:55789 closing leases
2016-08-15 14:52:56,720 INFO [M:0;10.22.9.171:55789] regionserver.Leases(149): M:0;10.22.9.171:55789 closed leases
2016-08-15 14:52:56,720 INFO [M:0;10.22.9.171:55789] hbase.ChoreService(323): Chore service for: 10.22.9.171,55789,1471297733379 had [[ScheduledChore: Name: 10.22.9.171,55789,1471297733379-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS], [ScheduledChore: Name: 10.22.9.171,55789,1471297733379-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,55789,1471297733379-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,55789,1471297733379-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: CatalogJanitor-10.22.9.171:55789 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,55789,1471297733379 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,55789,1471297733379-ClusterStatusChore Period: 60000 Unit: MILLISECONDS]] on shutdown
2016-08-15 14:52:56,725 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55789-0x156902d8a140006, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/replication/rs/10.22.9.171,55789,1471297733379
2016-08-15 14:52:56,725 INFO [M:0;10.22.9.171:55789] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish...
2016-08-15 14:52:56,726 INFO [M:0;10.22.9.171:55789] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish...
2016-08-15 14:52:56,726 DEBUG [M:0;10.22.9.171:55789] master.HMaster(1127): Stopping service threads
2016-08-15 14:52:56,727 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55789-0x156902d8a140006, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/master
2016-08-15 14:52:56,727 INFO [M:0;10.22.9.171:55789] hbase.ChoreService(323): Chore service for: 10.22.9.171,55789,1471297733379_splitLogManager_ had [] on shutdown
2016-08-15 14:52:56,727 INFO [M:0;10.22.9.171:55789] master.LogRollMasterProcedureManager(55): stop: server shutting down.
2016-08-15 14:52:56,727 INFO [M:0;10.22.9.171:55789] flush.MasterFlushTableProcedureManager(78): stop: server shutting down.
2016-08-15 14:52:56,728 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:55789-0x156902d8a140006, quorum=localhost:53145, baseZNode=/2 Set watcher on znode that does not yet exist, /2/master
2016-08-15 14:52:56,728 INFO [M:0;10.22.9.171:55789] ipc.RpcServer(2336): Stopping server on 55789
2016-08-15 14:52:56,728 INFO [RpcServer.listener,port=55789] ipc.RpcServer$Listener(816): RpcServer.listener,port=55789: stopping
2016-08-15 14:52:56,728 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-15 14:52:56,729 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-15 14:52:56,730 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55789-0x156902d8a140006, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.9.171,55789,1471297733379
2016-08-15 14:52:56,730 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,55789,1471297733379]
2016-08-15 14:52:56,731 INFO [M:0;10.22.9.171:55789] regionserver.HRegionServer(1135): stopping server 10.22.9.171,55789,1471297733379; zookeeper connection closed.
2016-08-15 14:52:56,731 INFO [M:0;10.22.9.171:55789] regionserver.HRegionServer(1138): M:0;10.22.9.171:55789 exiting
2016-08-15 14:52:56,731 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-08-15 14:52:56,740 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-15 14:52:56,849 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/dfscluster_c943125c-d7f8-4d66-96e5-e11664e833bd/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/dfscluster_c943125c-d7f8-4d66-96e5-e11664e833bd/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:55781] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1520443837-10.22.9.171-1471297732921 (Datanode Uuid af38b42d-c6d1-48a8-8d75-0a4c8011c006) service to localhost/127.0.0.1:55781 interrupted
2016-08-15 14:52:56,849 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/dfscluster_c943125c-d7f8-4d66-96e5-e11664e833bd/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/71f3442a-7ed8-4457-aee0-165e59088eaf/dfscluster_c943125c-d7f8-4d66-96e5-e11664e833bd/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:55781] datanode.BPServiceActor(835): Ending block pool service for: Block pool BP-1520443837-10.22.9.171-1471297732921 (Datanode Uuid af38b42d-c6d1-48a8-8d75-0a4c8011c006) service to localhost/127.0.0.1:55781
2016-08-15 14:52:56,908 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-15 14:52:57,034 INFO [main] hbase.HBaseTestingUtility(1155): Minicluster is down
2016-08-15 14:52:57,034 INFO [main] hbase.HBaseTestingUtility(1142): Shutting down minicluster
2016-08-15 14:52:57,034 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-15 14:52:57,034 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140005
2016-08-15 14:52:57,037 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:52:57,037 DEBUG [main] util.JVMClusterUtil(241): Shutting down HBase Cluster
2016-08-15 14:52:57,037 DEBUG [main] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.backup.master.BackupController
2016-08-15 14:52:57,037 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:55778 because read count=-1. Number of active connections: 9
2016-08-15 14:52:57,038 INFO [main] regionserver.HRegionServer(1918): STOPPED: Cluster shutdown requested
2016-08-15 14:52:57,037 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:55842 because read count=-1. Number of active connections: 9
2016-08-15 14:52:57,037 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (-1000837068) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:52:57,037 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (1346934676) to /10.22.9.171:55755 from tyu: closed
2016-08-15 14:52:57,038 INFO [M:0;10.22.9.171:55755] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-08-15 14:52:57,039 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56091 because read count=-1. Number of active connections: 6
2016-08-15 14:52:57,039 INFO [M:0;10.22.9.171:55755] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore.
2016-08-15 14:52:57,039 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-618672721) to /10.22.9.171:55757 from tyu: closed
2016-08-15 14:52:57,039 INFO [M:0;10.22.9.171:55755] procedure2.ProcedureExecutor(532): Stopping the procedure executor
2016-08-15 14:52:57,039 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting
2016-08-15 14:52:57,039 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-08-15 14:52:57,039 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting
2016-08-15 14:52:57,039 INFO [SplitLogWorker-10.22.9.171:55755] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-08-15 14:52:57,040 INFO [main] regionserver.HRegionServer(1918): STOPPED: Shutdown requested
2016-08-15 14:52:57,039 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-08-15 14:52:57,039 INFO [M:0;10.22.9.171:55755] wal.WALProcedureStore(232): Stopping the WAL Procedure Store
2016-08-15 14:52:57,040 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-08-15 14:52:57,040 INFO [RS:0;10.22.9.171:55757] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-08-15 14:52:57,040 INFO [SplitLogWorker-10.22.9.171:55755] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,55755,1471297724766 exiting
2016-08-15 14:52:57,040 INFO [RS:0;10.22.9.171:55757] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore.
2016-08-15 14:52:57,040 INFO [SplitLogWorker-10.22.9.171:55757] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-08-15 14:52:57,040 INFO [RS:0;10.22.9.171:55757] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-15 14:52:57,040 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-08-15 14:52:57,041 INFO [RS:0;10.22.9.171:55757] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-15 14:52:57,041 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting
2016-08-15 14:52:57,041 INFO [SplitLogWorker-10.22.9.171:55757] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,55757,1471297725443 exiting
2016-08-15 14:52:57,040 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting
2016-08-15 14:52:57,041 INFO [RS:0;10.22.9.171:55757] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-15 14:52:57,042 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] handler.CloseRegionHandler(90): Processing close of ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.
2016-08-15 14:52:57,042 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(90): Processing close of ns1:test-1471297750223,,1471297753292.d0d5e63c01f66001cc1c60dbba147803.
2016-08-15 14:52:57,042 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] handler.CloseRegionHandler(90): Processing close of ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.
2016-08-15 14:52:57,043 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1419): Closing ns1:test-1471297750223,,1471297753292.d0d5e63c01f66001cc1c60dbba147803.: disabling compactions & flushes
2016-08-15 14:52:57,042 INFO [RS:0;10.22.9.171:55757] regionserver.HRegionServer(1063): stopping server 10.22.9.171,55757,1471297725443
2016-08-15 14:52:57,043 DEBUG [RS:0;10.22.9.171:55757] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-15 14:52:57,043 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1446): Updates disabled for region ns1:test-1471297750223,,1471297753292.d0d5e63c01f66001cc1c60dbba147803.
2016-08-15 14:52:57,043 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(1419): Closing ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.: disabling compactions & flushes
2016-08-15 14:52:57,043 INFO [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=32.65 KB
2016-08-15 14:52:57,042 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.: disabling compactions & flushes
2016-08-15 14:52:57,043 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(1446): Updates disabled for region ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.
2016-08-15 14:52:57,043 INFO [RS:0;10.22.9.171:55757] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140003
2016-08-15 14:52:57,043 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.
2016-08-15 14:52:57,044 INFO [StoreCloserThread-ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.-1] regionserver.HStore(839): Closed f
2016-08-15 14:52:57,044 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964
2016-08-15 14:52:57,044 INFO [StoreCloserThread-ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.-1] regionserver.HStore(839): Closed f
2016-08-15 14:52:57,044 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:52:57,044 DEBUG [RS:0;10.22.9.171:55757] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:52:57,044 INFO [RS:0;10.22.9.171:55757] regionserver.HRegionServer(1292): Waiting on 9 regions to close
2016-08-15 14:52:57,044 DEBUG [RS:0;10.22.9.171:55757] regionserver.HRegionServer(1296): {1945f514e609ff061d2c4aee1cdb82e3=ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3., 6cdb399964f82b5b2b7ceb6977686dfd=ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd., d0d5e63c01f66001cc1c60dbba147803=ns1:test-1471297750223,,1471297753292.d0d5e63c01f66001cc1c60dbba147803., 2ac6cbe281fdb4f0f9c1edc2931c4a3e=hbase:backup,,1471297732810.2ac6cbe281fdb4f0f9c1edc2931c4a3e., 7ac1188f2e9c4e31e67f0d3df5f7670d=ns2:test-14712977502231,,1471297755947.7ac1188f2e9c4e31e67f0d3df5f7670d., 423858dc52aa47bb136b344dffa37b24=ns4:test-14712977502233,,1471297759756.423858dc52aa47bb136b344dffa37b24., 0b36fb2815988fac833cbf3ff5af4331=ns3:test-14712977502232,,1471297758521.0b36fb2815988fac833cbf3ff5af4331., 1a2af1efddb74842cc0d4b4b051d5478=ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478., 398ca33ca6e640575cac0c2baa029825=ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.}
2016-08-15 14:52:57,044 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297811122
2016-08-15 14:52:57,044 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (378424864) to /10.22.9.171:55755 from tyu.hfs.0: closed
2016-08-15 14:52:57,044 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:56043 because read count=-1. Number of active connections: 7
2016-08-15 14:52:57,052 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns4/table4_restore/6cdb399964f82b5b2b7ceb6977686dfd/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-15 14:52:57,052 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns3/table3_restore/1945f514e609ff061d2c4aee1cdb82e3/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-15 14:52:57,053 INFO [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(1552): Closed ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.
2016-08-15 14:52:57,053 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] handler.CloseRegionHandler(122): Closed ns4:table4_restore,,1471297836727.6cdb399964f82b5b2b7ceb6977686dfd.
2016-08-15 14:52:57,053 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] handler.CloseRegionHandler(90): Processing close of hbase:backup,,1471297732810.2ac6cbe281fdb4f0f9c1edc2931c4a3e.
2016-08-15 14:52:57,054 INFO [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.
2016-08-15 14:52:57,054 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(1419): Closing hbase:backup,,1471297732810.2ac6cbe281fdb4f0f9c1edc2931c4a3e.: disabling compactions & flushes
2016-08-15 14:52:57,054 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] handler.CloseRegionHandler(122): Closed ns3:table3_restore,,1471297834465.1945f514e609ff061d2c4aee1cdb82e3.
2016-08-15 14:52:57,054 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1471297732810.2ac6cbe281fdb4f0f9c1edc2931c4a3e.
2016-08-15 14:52:57,054 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] handler.CloseRegionHandler(90): Processing close of ns2:test-14712977502231,,1471297755947.7ac1188f2e9c4e31e67f0d3df5f7670d.
2016-08-15 14:52:57,054 INFO [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=15.48 KB
2016-08-15 14:52:57,054 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(1419): Closing ns2:test-14712977502231,,1471297755947.7ac1188f2e9c4e31e67f0d3df5f7670d.: disabling compactions & flushes
2016-08-15 14:52:57,054 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(1446): Updates disabled for region ns2:test-14712977502231,,1471297755947.7ac1188f2e9c4e31e67f0d3df5f7670d.
2016-08-15 14:52:57,055 INFO [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=840 B
2016-08-15 14:52:57,055 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387
2016-08-15 14:52:57,055 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542
2016-08-15 14:52:57,064 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741999_1175{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 5016
2016-08-15 14:52:57,065 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073742000_1176{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 10505
2016-08-15 14:52:57,066 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073742001_1177{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:52:57,067 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 465
2016-08-15 14:52:57,067 INFO [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=405, memsize=32.6 K, hasBloomFilter=true, into tmp file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/test-1471297750223/d0d5e63c01f66001cc1c60dbba147803/.tmp/17e0e33e52e84355b58b60eccef94bb2
2016-08-15 14:52:57,067 INFO [M:0;10.22.9.171:55755] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-15 14:52:57,067 INFO [M:0;10.22.9.171:55755] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-15 14:52:57,067 INFO [M:0;10.22.9.171:55755] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-15 14:52:57,067 INFO [M:0;10.22.9.171:55755] regionserver.HRegionServer(1063): stopping server 10.22.9.171,55755,1471297724766
2016-08-15 14:52:57,067 DEBUG [M:0;10.22.9.171:55755] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-15 14:52:57,067 DEBUG [RS_CLOSE_REGION-10.22.9.171:55755-0] handler.CloseRegionHandler(90): Processing close of hbase:namespace,,1471297729981.06988d6fe8c0dfd28a742a1975b79cc9.
2016-08-15 14:52:57,068 INFO [M:0;10.22.9.171:55755] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x156902d8a140002
2016-08-15 14:52:57,068 DEBUG [RS_CLOSE_REGION-10.22.9.171:55755-0] regionserver.HRegion(1419): Closing hbase:namespace,,1471297729981.06988d6fe8c0dfd28a742a1975b79cc9.: disabling compactions & flushes
2016-08-15 14:52:57,068 DEBUG [RS_CLOSE_REGION-10.22.9.171:55755-0] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1471297729981.06988d6fe8c0dfd28a742a1975b79cc9.
2016-08-15 14:52:57,068 INFO [RS_CLOSE_REGION-10.22.9.171:55755-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=1016 B
2016-08-15 14:52:57,069 DEBUG [M:0;10.22.9.171:55755] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-15 14:52:57,071 INFO [M:0;10.22.9.171:55755] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-15 14:52:57,071 INFO [M:0;10.22.9.171:55755] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-15 14:52:57,071 INFO [M:0;10.22.9.171:55755] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-15 14:52:57,071 INFO [M:0;10.22.9.171:55755] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-15 14:52:57,071 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (-647659552) to /10.22.9.171:55757 from tyu: closed
2016-08-15 14:52:57,071 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:55803 because read count=-1. Number of active connections: 5
2016-08-15 14:52:57,071 INFO [M:0;10.22.9.171:55755] regionserver.HRegionServer(1292): Waiting on 2 regions to close
2016-08-15 14:52:57,072 DEBUG [M:0;10.22.9.171:55755] regionserver.HRegionServer(1296): {06988d6fe8c0dfd28a742a1975b79cc9=hbase:namespace,,1471297729981.06988d6fe8c0dfd28a742a1975b79cc9., 1588230740=hbase:meta,,1.1588230740}
2016-08-15 14:52:57,071 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (2060836757) to /10.22.9.171:55757 from tyu: closed
2016-08-15 14:52:57,071 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297811543
2016-08-15 14:52:57,072 DEBUG [RS_CLOSE_META-10.22.9.171:55755-0] handler.CloseRegionHandler(90): Processing close of hbase:meta,,1.1588230740
2016-08-15 14:52:57,071 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55757] ipc.RpcServer$Listener(912): RpcServer.listener,port=55757: DISCONNECTING client 10.22.9.171:56040 because read count=-1. Number of active connections: 4
2016-08-15 14:52:57,073 DEBUG [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-08-15 14:52:57,073 DEBUG [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740
2016-08-15 14:52:57,073 INFO [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=28.55 KB
2016-08-15 14:52:57,074 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443
2016-08-15 14:52:57,078 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/test-1471297750223/d0d5e63c01f66001cc1c60dbba147803/.tmp/17e0e33e52e84355b58b60eccef94bb2 as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/test-1471297750223/d0d5e63c01f66001cc1c60dbba147803/f/17e0e33e52e84355b58b60eccef94bb2
2016-08-15 14:52:57,082 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073742002_1178{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:52:57,083 INFO [RS_CLOSE_REGION-10.22.9.171:55755-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=10, memsize=1016, hasBloomFilter=true, into tmp file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/namespace/06988d6fe8c0dfd28a742a1975b79cc9/.tmp/34f2b46450fa464fbccebd12fa576e88
2016-08-15 14:52:57,086 INFO [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HStore(934): Added hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/test-1471297750223/d0d5e63c01f66001cc1c60dbba147803/f/17e0e33e52e84355b58b60eccef94bb2, entries=199, sequenceid=405, filesize=12.7 K
2016-08-15 14:52:57,086 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964
2016-08-15 14:52:57,086 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073742003_1179{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 0
2016-08-15 14:52:57,087 INFO [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(2545): Finished memstore flush of ~32.65 KB/33432, currentsize=0 B/0 for region ns1:test-1471297750223,,1471297753292.d0d5e63c01f66001cc1c60dbba147803.
in 44ms, sequenceid=405, compaction requested=false 2016-08-15 14:52:57,087 INFO [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=77, memsize=24.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/meta/1588230740/.tmp/515bb07afb3549aebbb51039fe912262 2016-08-15 14:52:57,088 INFO [StoreCloserThread-ns1:test-1471297750223,,1471297753292.d0d5e63c01f66001cc1c60dbba147803.-1] regionserver.HStore(839): Closed f 2016-08-15 14:52:57,089 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964 2016-08-15 14:52:57,092 DEBUG [RS_CLOSE_REGION-10.22.9.171:55755-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/namespace/06988d6fe8c0dfd28a742a1975b79cc9/.tmp/34f2b46450fa464fbccebd12fa576e88 as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/namespace/06988d6fe8c0dfd28a742a1975b79cc9/info/34f2b46450fa464fbccebd12fa576e88 2016-08-15 14:52:57,093 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/test-1471297750223/d0d5e63c01f66001cc1c60dbba147803/recovered.edits/408.seqid to file, newSeqId=408, maxSeqId=2 2016-08-15 14:52:57,095 INFO [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1552): Closed ns1:test-1471297750223,,1471297753292.d0d5e63c01f66001cc1c60dbba147803. 2016-08-15 14:52:57,095 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(122): Closed ns1:test-1471297750223,,1471297753292.d0d5e63c01f66001cc1c60dbba147803. 2016-08-15 14:52:57,095 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(90): Processing close of ns4:test-14712977502233,,1471297759756.423858dc52aa47bb136b344dffa37b24. 2016-08-15 14:52:57,095 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1419): Closing ns4:test-14712977502233,,1471297759756.423858dc52aa47bb136b344dffa37b24.: disabling compactions & flushes 2016-08-15 14:52:57,095 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1446): Updates disabled for region ns4:test-14712977502233,,1471297759756.423858dc52aa47bb136b344dffa37b24. 
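
Every record in this capture follows the same Log4j-style layout — "<timestamp> <LEVEL> [<thread>] <Class>(<line>): <message>" — with many records run together on one physical line and some wrapped mid-record. A minimal Python sketch for splitting a blob like this back into records; the pattern and the split_records name are illustrative, not part of any HBase tooling, and thread names containing nested brackets (the DataNode threads near the end of this log) would need a smarter pattern:

import re

# Shared timestamp prefix of every record in this log.
TS = r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}"

# Header of one record: "<ts> <LEVEL> [<thread>] <Class>(<line>): ".
# Simple [^\]]+ thread names only; nested-bracket threads are not handled.
HEADER = re.compile(
    "(?P<ts>" + TS + r") (?P<level>DEBUG|INFO|WARN|ERROR|FATAL) "
    r"\[(?P<thread>[^\]]+)\] (?P<source>[\w.$]+)\((?P<line>\d+)\): "
)

def split_records(blob):
    """Yield (ts, level, thread, source, message) tuples from a log blob,
    joining messages that were hard-wrapped across physical lines."""
    heads = list(HEADER.finditer(blob))
    for cur, nxt in zip(heads, heads[1:] + [None]):
        end = nxt.start() if nxt else len(blob)
        yield (cur["ts"], cur["level"], cur["thread"], cur["source"],
               " ".join(blob[cur.end():end].split()))
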
2016-08-15 14:52:57,096 INFO [StoreCloserThread-ns4:test-14712977502233,,1471297759756.423858dc52aa47bb136b344dffa37b24.-1] regionserver.HStore(839): Closed f 2016-08-15 14:52:57,096 INFO [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.StoreFile$Reader(1606): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 515bb07afb3549aebbb51039fe912262 2016-08-15 14:52:57,096 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387 2016-08-15 14:52:57,100 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns4/test-14712977502233/423858dc52aa47bb136b344dffa37b24/recovered.edits/5.seqid to file, newSeqId=5, maxSeqId=2 2016-08-15 14:52:57,101 INFO [RS_CLOSE_REGION-10.22.9.171:55755-0] regionserver.HStore(934): Added hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/namespace/06988d6fe8c0dfd28a742a1975b79cc9/info/34f2b46450fa464fbccebd12fa576e88, entries=6, sequenceid=10, filesize=4.9 K 2016-08-15 14:52:57,101 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297811543 2016-08-15 14:52:57,102 INFO [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1552): Closed ns4:test-14712977502233,,1471297759756.423858dc52aa47bb136b344dffa37b24. 2016-08-15 14:52:57,102 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(122): Closed ns4:test-14712977502233,,1471297759756.423858dc52aa47bb136b344dffa37b24. 2016-08-15 14:52:57,102 INFO [RS_CLOSE_REGION-10.22.9.171:55755-0] regionserver.HRegion(2545): Finished memstore flush of ~1016 B/1016, currentsize=0 B/0 for region hbase:namespace,,1471297729981.06988d6fe8c0dfd28a742a1975b79cc9. in 34ms, sequenceid=10, compaction requested=false 2016-08-15 14:52:57,102 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(90): Processing close of ns3:test-14712977502232,,1471297758521.0b36fb2815988fac833cbf3ff5af4331. 2016-08-15 14:52:57,102 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1419): Closing ns3:test-14712977502232,,1471297758521.0b36fb2815988fac833cbf3ff5af4331.: disabling compactions & flushes 2016-08-15 14:52:57,102 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1446): Updates disabled for region ns3:test-14712977502232,,1471297758521.0b36fb2815988fac833cbf3ff5af4331. 
2016-08-15 14:52:57,103 INFO [StoreCloserThread-ns3:test-14712977502232,,1471297758521.0b36fb2815988fac833cbf3ff5af4331.-1] regionserver.HStore(839): Closed f 2016-08-15 14:52:57,103 INFO [StoreCloserThread-hbase:namespace,,1471297729981.06988d6fe8c0dfd28a742a1975b79cc9.-1] regionserver.HStore(839): Closed info 2016-08-15 14:52:57,104 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297811122 2016-08-15 14:52:57,104 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297811543 2016-08-15 14:52:57,104 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073742004_1180{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|FINALIZED]]} size 0 2016-08-15 14:52:57,105 INFO [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=77, memsize=4.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/meta/1588230740/.tmp/cd879332b14d40cf855b7c05264d05cc 2016-08-15 14:52:57,106 INFO [10.22.9.171,55757,1471297725443_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.9.171,55757,1471297725443-MemstoreFlusherChore was stopped 2016-08-15 14:52:57,108 DEBUG [RS_CLOSE_REGION-10.22.9.171:55755-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/namespace/06988d6fe8c0dfd28a742a1975b79cc9/recovered.edits/13.seqid to file, newSeqId=13, maxSeqId=2 2016-08-15 14:52:57,108 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns3/test-14712977502232/0b36fb2815988fac833cbf3ff5af4331/recovered.edits/5.seqid to file, newSeqId=5, maxSeqId=2 2016-08-15 14:52:57,109 INFO [RS_CLOSE_REGION-10.22.9.171:55755-0] regionserver.HRegion(1552): Closed hbase:namespace,,1471297729981.06988d6fe8c0dfd28a742a1975b79cc9. 2016-08-15 14:52:57,110 DEBUG [RS_CLOSE_REGION-10.22.9.171:55755-0] handler.CloseRegionHandler(122): Closed hbase:namespace,,1471297729981.06988d6fe8c0dfd28a742a1975b79cc9. 2016-08-15 14:52:57,110 INFO [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1552): Closed ns3:test-14712977502232,,1471297758521.0b36fb2815988fac833cbf3ff5af4331. 2016-08-15 14:52:57,110 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(122): Closed ns3:test-14712977502232,,1471297758521.0b36fb2815988fac833cbf3ff5af4331. 2016-08-15 14:52:57,110 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(90): Processing close of ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 
2016-08-15 14:52:57,110 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.: disabling compactions & flushes 2016-08-15 14:52:57,110 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:52:57,111 INFO [StoreCloserThread-ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478.-1] regionserver.HStore(839): Closed f 2016-08-15 14:52:57,111 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964 2016-08-15 14:52:57,114 DEBUG [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/meta/1588230740/.tmp/515bb07afb3549aebbb51039fe912262 as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/meta/1588230740/info/515bb07afb3549aebbb51039fe912262 2016-08-15 14:52:57,116 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns1/table1_restore/1a2af1efddb74842cc0d4b4b051d5478/recovered.edits/8.seqid to file, newSeqId=8, maxSeqId=2 2016-08-15 14:52:57,117 INFO [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:52:57,117 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(122): Closed ns1:table1_restore,,1471297829305.1a2af1efddb74842cc0d4b4b051d5478. 2016-08-15 14:52:57,117 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(90): Processing close of ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:52:57,117 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.: disabling compactions & flushes 2016-08-15 14:52:57,117 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 
2016-08-15 14:52:57,119 INFO [StoreCloserThread-ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825.-1] regionserver.HStore(839): Closed f 2016-08-15 14:52:57,119 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542 2016-08-15 14:52:57,122 INFO [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.StoreFile$Reader(1606): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 515bb07afb3549aebbb51039fe912262 2016-08-15 14:52:57,122 INFO [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.HStore(934): Added hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/meta/1588230740/info/515bb07afb3549aebbb51039fe912262, entries=100, sequenceid=77, filesize=16.5 K 2016-08-15 14:52:57,122 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/table2_restore/398ca33ca6e640575cac0c2baa029825/recovered.edits/8.seqid to file, newSeqId=8, maxSeqId=2 2016-08-15 14:52:57,123 DEBUG [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/meta/1588230740/.tmp/cd879332b14d40cf855b7c05264d05cc as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/meta/1588230740/table/cd879332b14d40cf855b7c05264d05cc 2016-08-15 14:52:57,123 INFO [RS_CLOSE_REGION-10.22.9.171:55757-2] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 2016-08-15 14:52:57,123 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-2] handler.CloseRegionHandler(122): Closed ns2:table2_restore,,1471297832110.398ca33ca6e640575cac0c2baa029825. 
2016-08-15 14:52:57,128 INFO [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.HStore(934): Added hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/meta/1588230740/table/cd879332b14d40cf855b7c05264d05cc, entries=24, sequenceid=77, filesize=5.7 K 2016-08-15 14:52:57,128 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:57,129 INFO [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.HRegion(2545): Finished memstore flush of ~28.55 KB/29232, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 56ms, sequenceid=77, compaction requested=false 2016-08-15 14:52:57,130 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info 2016-08-15 14:52:57,132 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table 2016-08-15 14:52:57,132 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:57,136 DEBUG [RS_CLOSE_META-10.22.9.171:55755-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/meta/1588230740/recovered.edits/80.seqid to file, newSeqId=80, maxSeqId=3 2016-08-15 14:52:57,137 DEBUG [RS_CLOSE_META-10.22.9.171:55755-0] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2016-08-15 14:52:57,137 INFO [RS_CLOSE_META-10.22.9.171:55755-0] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740 2016-08-15 14:52:57,138 DEBUG [RS_CLOSE_META-10.22.9.171:55755-0] handler.CloseRegionHandler(122): Closed hbase:meta,,1.1588230740 2016-08-15 14:52:57,156 INFO [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting. 2016-08-15 14:52:57,272 INFO [M:0;10.22.9.171:55755] regionserver.HRegionServer(1091): stopping server 10.22.9.171,55755,1471297724766; all regions closed. 2016-08-15 14:52:57,273 DEBUG [M:0;10.22.9.171:55755] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta 2016-08-15 14:52:57,273 DEBUG [M:0;10.22.9.171:55755] wal.FSHLog(1090): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766.meta/10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0.1471297728443 2016-08-15 14:52:57,276 INFO [RS_OPEN_META-10.22.9.171:55755-0-MetaLogRoller] regionserver.LogRoller(170): LogRoller exiting. 2016-08-15 14:52:57,285 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 16774 2016-08-15 14:52:57,365 INFO [master//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting. 
2016-08-15 14:52:57,468 INFO [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=211, memsize=840, hasBloomFilter=true, into tmp file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/test-14712977502231/7ac1188f2e9c4e31e67f0d3df5f7670d/.tmp/29cdb471ead049f0b1f9a71f9d4de423 2016-08-15 14:52:57,468 INFO [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=21, memsize=11.8 K, hasBloomFilter=true, into tmp file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/backup/2ac6cbe281fdb4f0f9c1edc2931c4a3e/.tmp/b84cd1a9fbc640cebb3920151cc6ff5c 2016-08-15 14:52:57,477 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/test-14712977502231/7ac1188f2e9c4e31e67f0d3df5f7670d/.tmp/29cdb471ead049f0b1f9a71f9d4de423 as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/test-14712977502231/7ac1188f2e9c4e31e67f0d3df5f7670d/f/29cdb471ead049f0b1f9a71f9d4de423 2016-08-15 14:52:57,484 INFO [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HStore(934): Added hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/test-14712977502231/7ac1188f2e9c4e31e67f0d3df5f7670d/f/29cdb471ead049f0b1f9a71f9d4de423, entries=5, sequenceid=211, filesize=4.9 K 2016-08-15 14:52:57,484 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073742005_1181{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 6315 2016-08-15 14:52:57,485 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542 2016-08-15 14:52:57,485 INFO [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(2545): Finished memstore flush of ~840 B/840, currentsize=0 B/0 for region ns2:test-14712977502231,,1471297755947.7ac1188f2e9c4e31e67f0d3df5f7670d. in 430ms, sequenceid=211, compaction requested=false 2016-08-15 14:52:57,487 INFO [StoreCloserThread-ns2:test-14712977502231,,1471297755947.7ac1188f2e9c4e31e67f0d3df5f7670d.-1] regionserver.HStore(839): Closed f 2016-08-15 14:52:57,487 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542 2016-08-15 14:52:57,491 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/ns2/test-14712977502231/7ac1188f2e9c4e31e67f0d3df5f7670d/recovered.edits/214.seqid to file, newSeqId=214, maxSeqId=2 2016-08-15 14:52:57,492 INFO [RS_CLOSE_REGION-10.22.9.171:55757-0] regionserver.HRegion(1552): Closed ns2:test-14712977502231,,1471297755947.7ac1188f2e9c4e31e67f0d3df5f7670d. 2016-08-15 14:52:57,492 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-0] handler.CloseRegionHandler(122): Closed ns2:test-14712977502231,,1471297755947.7ac1188f2e9c4e31e67f0d3df5f7670d. 
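
The close-time flush summaries above are regular enough to tabulate (bytes flushed, region, duration, sequenceid). A sketch assuming the HRegion wording stays exactly as printed here; the \s+ gaps in the pattern tolerate the hard line wraps of this capture, and the helper name is made up for illustration:

import re

# Matches summaries like: "Finished memstore flush of ~840 B/840,
# currentsize=0 B/0 for region ns2:... in 430ms, sequenceid=211, ..."
FLUSH = re.compile(
    r"Finished memstore flush of ~[\d.]+\s+[KMG]?B/(?P<bytes>\d+),\s+"
    r"currentsize=\S+\s+B/\d+\s+for region\s+(?P<region>\S+)\s+"
    r"in\s+(?P<ms>\d+)ms,\s+sequenceid=(?P<seq>\d+)"
)

def flush_stats(blob):
    """Print bytes flushed and duration for each close-time flush."""
    for m in FLUSH.finditer(blob):
        print(f"{m['region']:<60} {int(m['bytes']):>7} B in {m['ms']:>4} ms "
              f"(seqid {m['seq']})")
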
2016-08-15 14:52:57,550 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(146): regionserver//10.22.9.171:0.leaseChecker closing leases 2016-08-15 14:52:57,550 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(146): master//10.22.9.171:0.leaseChecker closing leases 2016-08-15 14:52:57,551 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(149): master//10.22.9.171:0.leaseChecker closed leases 2016-08-15 14:52:57,551 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(149): regionserver//10.22.9.171:0.leaseChecker closed leases 2016-08-15 14:52:57,693 DEBUG [M:0;10.22.9.171:55755] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs 2016-08-15 14:52:57,693 INFO [M:0;10.22.9.171:55755] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55755%2C1471297724766.meta.regiongroup-0:(num 1471297728443) 2016-08-15 14:52:57,693 DEBUG [M:0;10.22.9.171:55755] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766 2016-08-15 14:52:57,693 DEBUG [M:0;10.22.9.171:55755] wal.FSHLog(1090): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-1.1471297811543 2016-08-15 14:52:57,697 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741888_1064{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 893 2016-08-15 14:52:57,719 INFO [10.22.9.171,55755,1471297724766_splitLogManager__ChoreService_1] hbase.ScheduledChore(179): Chore: SplitLogManager Timeout Monitor was stopped 2016-08-15 14:52:57,888 INFO [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=21, memsize=3.7 K, hasBloomFilter=true, into tmp file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/backup/2ac6cbe281fdb4f0f9c1edc2931c4a3e/.tmp/528b519205584a729b774176455ecb74 2016-08-15 14:52:57,897 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/backup/2ac6cbe281fdb4f0f9c1edc2931c4a3e/.tmp/b84cd1a9fbc640cebb3920151cc6ff5c as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/backup/2ac6cbe281fdb4f0f9c1edc2931c4a3e/meta/b84cd1a9fbc640cebb3920151cc6ff5c 2016-08-15 14:52:57,904 INFO [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HStore(934): Added hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/backup/2ac6cbe281fdb4f0f9c1edc2931c4a3e/meta/b84cd1a9fbc640cebb3920151cc6ff5c, entries=35, sequenceid=21, filesize=10.3 K 2016-08-15 14:52:57,905 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/backup/2ac6cbe281fdb4f0f9c1edc2931c4a3e/.tmp/528b519205584a729b774176455ecb74 as hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/backup/2ac6cbe281fdb4f0f9c1edc2931c4a3e/session/528b519205584a729b774176455ecb74 2016-08-15 14:52:57,911 INFO [RS_CLOSE_REGION-10.22.9.171:55757-1] 
regionserver.HStore(934): Added hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/backup/2ac6cbe281fdb4f0f9c1edc2931c4a3e/session/528b519205584a729b774176455ecb74, entries=2, sequenceid=21, filesize=6.2 K 2016-08-15 14:52:57,911 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387 2016-08-15 14:52:57,912 INFO [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(2545): Finished memstore flush of ~15.48 KB/15848, currentsize=0 B/0 for region hbase:backup,,1471297732810.2ac6cbe281fdb4f0f9c1edc2931c4a3e. in 858ms, sequenceid=21, compaction requested=false 2016-08-15 14:52:57,913 INFO [StoreCloserThread-hbase:backup,,1471297732810.2ac6cbe281fdb4f0f9c1edc2931c4a3e.-1] regionserver.HStore(839): Closed meta 2016-08-15 14:52:57,914 INFO [StoreCloserThread-hbase:backup,,1471297732810.2ac6cbe281fdb4f0f9c1edc2931c4a3e.-1] regionserver.HStore(839): Closed session 2016-08-15 14:52:57,914 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387 2016-08-15 14:52:57,920 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/data/hbase/backup/2ac6cbe281fdb4f0f9c1edc2931c4a3e/recovered.edits/24.seqid to file, newSeqId=24, maxSeqId=2 2016-08-15 14:52:57,921 INFO [RS_CLOSE_REGION-10.22.9.171:55757-1] regionserver.HRegion(1552): Closed hbase:backup,,1471297732810.2ac6cbe281fdb4f0f9c1edc2931c4a3e. 2016-08-15 14:52:57,922 DEBUG [RS_CLOSE_REGION-10.22.9.171:55757-1] handler.CloseRegionHandler(122): Closed hbase:backup,,1471297732810.2ac6cbe281fdb4f0f9c1edc2931c4a3e. 2016-08-15 14:52:58,041 INFO [10.22.9.171,55755,1471297724766_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.9.171,55755,1471297724766-MemstoreFlusherChore was stopped 2016-08-15 14:52:58,059 INFO [RS:0;10.22.9.171:55757] regionserver.HRegionServer(1091): stopping server 10.22.9.171,55757,1471297725443; all regions closed. 
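
Each WAL writer shuts down as a "Moved N WAL file(s) to .../oldWALs" DEBUG followed by a "Closed WAL: FSHLog <name>:(num <ts>)" INFO, as in the records just above and below. A quick tally sketch; the helper name is made up for illustration:

import re

MOVED = re.compile(r"Moved (\d+) WAL file\(s\) to \S*/oldWALs")
CLOSED = re.compile(r"Closed WAL: FSHLog (\S+):\(num \d+\)")

def wal_shutdown_summary(blob):
    """Count WAL files archived to oldWALs and list the writers closed."""
    total = sum(int(n) for n in MOVED.findall(blob))
    print(f"{total} WAL file(s) moved to oldWALs")
    for name in CLOSED.findall(blob):
        print("closed writer:", name)
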
2016-08-15 14:52:58,059 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443 2016-08-15 14:52:58,059 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1090): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-0.1471297811122 2016-08-15 14:52:58,065 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741885_1061{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 1511 2016-08-15 14:52:58,105 DEBUG [M:0;10.22.9.171:55755] wal.FSHLog(1045): Moved 2 WAL file(s) to /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs 2016-08-15 14:52:58,105 INFO [M:0;10.22.9.171:55755] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55755%2C1471297724766.regiongroup-1:(num 1471297811543) 2016-08-15 14:52:58,105 DEBUG [M:0;10.22.9.171:55755] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766 2016-08-15 14:52:58,105 DEBUG [M:0;10.22.9.171:55755] wal.FSHLog(1090): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55755,1471297724766/10.22.9.171%2C55755%2C1471297724766.regiongroup-0.1471297811122 2016-08-15 14:52:58,111 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741886_1062{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 91 2016-08-15 14:52:58,472 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs 2016-08-15 14:52:58,472 INFO [RS:0;10.22.9.171:55757] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55757%2C1471297725443.regiongroup-0:(num 1471297811122) 2016-08-15 14:52:58,472 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443 2016-08-15 14:52:58,473 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1090): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-2.1471297811964 2016-08-15 14:52:58,476 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741889_1065{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 2749 2016-08-15 14:52:58,484 INFO [Socket Reader #1 for port 55832] ipc.Server$Connection(1316): Auth successful for appattempt_1471297749092_0003_000001 (auth:SIMPLE) 2016-08-15 14:52:58,517 DEBUG [M:0;10.22.9.171:55755] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs 2016-08-15 14:52:58,517 INFO [M:0;10.22.9.171:55755] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55755%2C1471297724766.regiongroup-0:(num 1471297811122) 2016-08-15 14:52:58,517 DEBUG 
[M:0;10.22.9.171:55755] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:52:58,517 INFO [M:0;10.22.9.171:55755] regionserver.Leases(146): M:0;10.22.9.171:55755 closing leases 2016-08-15 14:52:58,517 INFO [M:0;10.22.9.171:55755] regionserver.Leases(149): M:0;10.22.9.171:55755 closed leases 2016-08-15 14:52:58,517 INFO [M:0;10.22.9.171:55755] hbase.ChoreService(323): Chore service for: 10.22.9.171,55755,1471297724766 had [[ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,55755,1471297724766-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,55755,1471297724766 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CatalogJanitor-10.22.9.171:55755 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,55755,1471297724766-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: 10.22.9.171,55755,1471297724766-ClusterStatusChore Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,55755,1471297724766-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS], [ScheduledChore: Name: 10.22.9.171,55755,1471297724766-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS]] on shutdown 2016-08-15 14:52:58,521 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/replication/rs/10.22.9.171,55755,1471297724766 2016-08-15 14:52:58,521 INFO [M:0;10.22.9.171:55755] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish... 2016-08-15 14:52:58,522 INFO [M:0;10.22.9.171:55755] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish... 
2016-08-15 14:52:58,522 INFO [M:0;10.22.9.171:55755] master.ServerManager(554): Waiting on regionserver(s) to go down 10.22.9.171,55755,1471297724766, 10.22.9.171,55757,1471297725443 2016-08-15 14:52:58,887 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1045): Moved 2 WAL file(s) to /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs 2016-08-15 14:52:58,888 INFO [RS:0;10.22.9.171:55757] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55757%2C1471297725443.regiongroup-2:(num 1471297811964) 2016-08-15 14:52:58,888 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443 2016-08-15 14:52:58,888 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1090): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-1.1471297812387 2016-08-15 14:52:58,900 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741890_1066{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-819f00bd-3da7-408b-80eb-45941a953c1d:NORMAL:127.0.0.1:55741|RBW]]} size 7385 2016-08-15 14:52:59,313 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1045): Moved 3 WAL file(s) to /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs 2016-08-15 14:52:59,313 INFO [RS:0;10.22.9.171:55757] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55757%2C1471297725443.regiongroup-1:(num 1471297812387) 2016-08-15 14:52:59,313 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443 2016-08-15 14:52:59,313 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1090): closing hdfs://localhost:55740/user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/WALs/10.22.9.171,55757,1471297725443/10.22.9.171%2C55757%2C1471297725443.regiongroup-3.1471297811542 2016-08-15 14:52:59,318 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:55741 is added to blk_1073741887_1063{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-377a7900-cf79-42da-ad74-017ae4581cbb:NORMAL:127.0.0.1:55741|RBW]]} size 2758 2016-08-15 14:52:59,567 INFO [M:0;10.22.9.171:55755] master.ServerManager(554): Waiting on regionserver(s) to go down 10.22.9.171,55755,1471297724766, 10.22.9.171,55757,1471297725443 2016-08-15 14:52:59,727 DEBUG [RS:0;10.22.9.171:55757] wal.FSHLog(1045): Moved 2 WAL file(s) to /user/tyu/test-data/adee3504-45e2-49c7-b960-e36724cc46d8/oldWALs 2016-08-15 14:52:59,727 INFO [RS:0;10.22.9.171:55757] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C55757%2C1471297725443.regiongroup-3:(num 1471297811542) 2016-08-15 14:52:59,727 DEBUG [RS:0;10.22.9.171:55757] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-15 14:52:59,727 INFO [RS:0;10.22.9.171:55757] regionserver.Leases(146): RS:0;10.22.9.171:55757 closing leases 2016-08-15 14:52:59,727 INFO [RS:0;10.22.9.171:55757] regionserver.Leases(149): RS:0;10.22.9.171:55757 closed leases 2016-08-15 14:52:59,727 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=55755] ipc.RpcServer$Listener(912): RpcServer.listener,port=55755: DISCONNECTING client 10.22.9.171:55766 because read count=-1. 
Number of active connections: 6 2016-08-15 14:52:59,727 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (-1971526703) to /10.22.9.171:55755 from tyu.hfs.0: closed 2016-08-15 14:52:59,728 INFO [RS:0;10.22.9.171:55757] hbase.ChoreService(323): Chore service for: 10.22.9.171,55757,1471297725443 had [[ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,55757,1471297725443 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown 2016-08-15 14:52:59,728 INFO [RS:0;10.22.9.171:55757] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish... 2016-08-15 14:52:59,728 INFO [RS:0;10.22.9.171:55757] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish... 2016-08-15 14:52:59,728 INFO [RS:0;10.22.9.171:55757] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish... 2016-08-15 14:52:59,728 INFO [RS:0;10.22.9.171:55757] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish... 2016-08-15 14:52:59,731 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/replication/rs/10.22.9.171,55757,1471297725443 2016-08-15 14:52:59,731 INFO [RS:0;10.22.9.171:55757] ipc.RpcServer(2336): Stopping server on 55757 2016-08-15 14:52:59,732 INFO [RpcServer.listener,port=55757] ipc.RpcServer$Listener(816): RpcServer.listener,port=55757: stopping 2016-08-15 14:52:59,732 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped 2016-08-15 14:52:59,732 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping 2016-08-15 14:52:59,733 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.9.171,55757,1471297725443 2016-08-15 14:52:59,733 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.9.171,55757,1471297725443 2016-08-15 14:52:59,733 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:55757-0x156902d8a140001, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs 2016-08-15 14:52:59,734 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,55757,1471297725443] 2016-08-15 14:52:59,736 INFO [main-EventThread] master.ServerManager(609): Cluster shutdown set; 10.22.9.171,55757,1471297725443 expired; onlineServers=1 2016-08-15 14:52:59,736 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs 2016-08-15 14:52:59,736 INFO [RS:0;10.22.9.171:55757] regionserver.HRegionServer(1135): stopping server 10.22.9.171,55757,1471297725443; zookeeper connection closed. 
2016-08-15 14:52:59,736 INFO [RS:0;10.22.9.171:55757] regionserver.HRegionServer(1138): RS:0;10.22.9.171:55757 exiting 2016-08-15 14:52:59,737 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6c05f4e5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(190): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6c05f4e5 2016-08-15 14:52:59,737 INFO [M:0;10.22.9.171:55755] master.ServerManager(562): ZK shows there is only the master self online, exiting now 2016-08-15 14:52:59,737 DEBUG [M:0;10.22.9.171:55755] master.HMaster(1127): Stopping service threads 2016-08-15 14:52:59,737 INFO [main] util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 regionserver(s) complete 2016-08-15 14:52:59,738 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/master 2016-08-15 14:52:59,738 INFO [M:0;10.22.9.171:55755] hbase.ChoreService(323): Chore service for: 10.22.9.171,55755,1471297724766_splitLogManager_ had [] on shutdown 2016-08-15 14:52:59,738 INFO [M:0;10.22.9.171:55755] master.LogRollMasterProcedureManager(55): stop: server shutting down. 2016-08-15 14:52:59,738 INFO [M:0;10.22.9.171:55755] flush.MasterFlushTableProcedureManager(78): stop: server shutting down. 2016-08-15 14:52:59,738 INFO [M:0;10.22.9.171:55755] ipc.RpcServer(2336): Stopping server on 55755 2016-08-15 14:52:59,738 INFO [RpcServer.listener,port=55755] ipc.RpcServer$Listener(816): RpcServer.listener,port=55755: stopping 2016-08-15 14:52:59,738 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master 2016-08-15 14:52:59,738 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped 2016-08-15 14:52:59,739 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping 2016-08-15 14:52:59,739 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:55755-0x156902d8a140000, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.9.171,55755,1471297724766 2016-08-15 14:52:59,740 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,55755,1471297724766] 2016-08-15 14:52:59,740 INFO [M:0;10.22.9.171:55755] regionserver.HRegionServer(1135): stopping server 10.22.9.171,55755,1471297724766; zookeeper connection closed. 
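
Note the ordering in these records: a server's regions close first, its WALs are archived, its leases close, and only then is its ZooKeeper ephemeral node deleted — which is what RegionServerTracker reacts to above. A sketch that pulls those milestones out of the tuples produced by the split_records sketch earlier; the milestone phrases are copied verbatim from the messages in this log:

from datetime import datetime

MILESTONES = ("stopping server", "all regions closed",
              "closing leases", "zookeeper connection closed", "exiting")

def shutdown_timeline(records):
    """records: (ts, level, thread, source, message) tuples, e.g. from
    split_records above; prints shutdown milestones in log order."""
    for ts, _lvl, thread, _src, msg in records:
        if any(key in msg for key in MILESTONES):
            t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S,%f")
            print(f"{t.time()}  [{thread}] {msg}")
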
2016-08-15 14:52:59,740 INFO [M:0;10.22.9.171:55755] regionserver.HRegionServer(1138): M:0;10.22.9.171:55755 exiting 2016-08-15 14:52:59,744 INFO [main] zookeeper.MiniZooKeeperCluster(319): Shutdown MiniZK cluster with all ZK servers 2016-08-15 14:52:59,744 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called 2016-08-15 14:52:59,750 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2016-08-15 14:52:59,845 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x1b0e93c3-0x156902d8a140010, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2016-08-15 14:52:59,845 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x2192bac5-0x156902d8a14000d, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2016-08-15 14:52:59,845 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x2192bac5-0x156902d8a14000d, quorum=localhost:53145, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring 2016-08-15 14:52:59,845 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=55755-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x1b0e93c3-0x156902d8a140010, quorum=localhost:53145, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring 2016-08-15 14:52:59,845 DEBUG [10.22.9.171:55755.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner-0x156902d8a140004, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2016-08-15 14:52:59,845 DEBUG [10.22.9.171:55755.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(679): replicationLogCleaner-0x156902d8a140004, quorum=localhost:53145, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring 2016-08-15 14:52:59,845 DEBUG [10.22.9.171:55789.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner-0x156902d8a14000a, quorum=localhost:53145, baseZNode=/2 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2016-08-15 14:52:59,846 DEBUG [10.22.9.171:55789.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(679): replicationLogCleaner-0x156902d8a14000a, quorum=localhost:53145, baseZNode=/2 Received Disconnected from ZooKeeper, ignoring 2016-08-15 14:52:59,845 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x35bc8e9c-0x156902d8a14000e, quorum=localhost:53145, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2016-08-15 14:52:59,846 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=55755-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x35bc8e9c-0x156902d8a14000e, quorum=localhost:53145, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring 2016-08-15 14:52:59,857 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/154dbba6-092f-4c49-ac4f-8c98ca437cdc/dfscluster_2ab7a416-99ef-4ee2-a636-a71e620e675a/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/154dbba6-092f-4c49-ac4f-8c98ca437cdc/dfscluster_2ab7a416-99ef-4ee2-a636-a71e620e675a/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:55740] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1170158313-10.22.9.171-1471297719769 (Datanode Uuid 20894bb9-bf9e-47fe-b50b-674d1111f581) service to 
localhost/127.0.0.1:55740 interrupted 2016-08-15 14:52:59,857 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/154dbba6-092f-4c49-ac4f-8c98ca437cdc/dfscluster_2ab7a416-99ef-4ee2-a636-a71e620e675a/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/154dbba6-092f-4c49-ac4f-8c98ca437cdc/dfscluster_2ab7a416-99ef-4ee2-a636-a71e620e675a/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:55740] datanode.BPServiceActor(835): Ending block pool service for: Block pool BP-1170158313-10.22.9.171-1471297719769 (Datanode Uuid 20894bb9-bf9e-47fe-b50b-674d1111f581) service to localhost/127.0.0.1:55740 2016-08-15 14:52:59,932 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2016-08-15 14:52:59,968 INFO [main] hbase.HBaseTestingUtility(1155): Minicluster is down 2016-08-15 14:52:59,968 INFO [main] hbase.HBaseTestingUtility(2498): Stopping mini mapreduce cluster... 2016-08-15 14:52:59,971 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0 2016-08-15 14:53:00,236 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties 2016-08-15 14:53:14,004 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0 2016-08-15 14:53:28,138 ERROR [Thread[Thread-636,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted 2016-08-15 14:53:28,139 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0 2016-08-15 14:53:28,248 WARN [ApplicationMaster Launcher] amlauncher.ApplicationMasterLauncher$LauncherThread(122): org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread interrupted. Returning. 2016-08-15 14:53:28,252 ERROR [ResourceManager Event Processor] resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor(672): Returning, interrupted : java.lang.InterruptedException 2016-08-15 14:53:28,253 ERROR [Thread[Thread-467,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted 2016-08-15 14:53:28,257 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0 2016-08-15 14:53:28,365 ERROR [Thread[Thread-447,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted 2016-08-15 14:53:28,365 INFO [main] hbase.HBaseTestingUtility(2501): Mini mapreduce cluster stopped 2016-08-15 14:53:28,373 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@6390403a 2016-08-15 14:53:28,373 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished. 2016-08-15 14:53:28,373 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@6390403a 2016-08-15 14:53:28,373 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished. 
2016-08-15 14:53:28,373 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@6390403a 2016-08-15 14:53:28,373 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished. 2016-08-15 14:53:28,373 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@6390403a 2016-08-15 14:53:28,373 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(120): Starting fs shutdown hook thread. 2016-08-15 14:53:28,381 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
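
Tying the sketches together, assuming the attachment is saved locally (the file name is hypothetical):

# Hypothetical usage of the sketches above; "test-run.log" is a made-up name.
with open("test-run.log") as fh:
    blob = fh.read()

records = list(split_records(blob))
flush_stats(blob)
wal_shutdown_summary(blob)
shutdown_timeline(records)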