2016-08-18 15:24:56,365 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1471559071463_ns2_test-14715590609531 for backup backup_1471559069358 succeeded.
2016-08-18 15:24:56,365 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(159): Trying to delete snapshot: snapshot_1471559070320_ns1_test-1471559060953
2016-08-18 15:24:56,368 DEBUG [ProcedureExecutor-4] snapshot.SnapshotManager(289): Deleting snapshot: snapshot_1471559070320_ns1_test-1471559060953
2016-08-18 15:24:56,368 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741857_1033 127.0.0.1:63273
2016-08-18 15:24:56,369 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741860_1036 127.0.0.1:63273
2016-08-18 15:24:56,369 DEBUG [ProcedureExecutor-4] master.FullTableBackupProcedure(168): Deleting the snapshot snapshot_1471559070320_ns1_test-1471559060953 for backup backup_1471559069358 succeeded.
2016-08-18 15:24:56,370 INFO [ProcedureExecutor-4] master.FullTableBackupProcedure(462): Backup backup_1471559069358 completed.
2016-08-18 15:24:56,478 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:632800000000001
2016-08-18 15:24:56,479 DEBUG [ProcedureExecutor-4] procedure2.ProcedureExecutor(870): Procedure completed in 27.0000sec: FullTableBackupProcedure (targetRootDir=hdfs://localhost:63272/backupUT; backupId=backup_1471559069358; tables=ns1:test-1471559060953,ns2:test-14715590609531,ns3:test-14715590609532,ns4:test-14715590609533) id=13 state=FINISHED
2016-08-18 15:24:56,531 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2fc1fd9c] blockmanagement.BlockManager(3482): BLOCK* BlockManager: ask 127.0.0.1:63273 to delete [blk_1073741857_1033, blk_1073741860_1036, blk_1073741861_1037, blk_1073741864_1040, blk_1073741865_1041, blk_1073741867_1043, blk_1073741868_1044, blk_1073741870_1046]
2016-08-18 15:24:57,675 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=13
2016-08-18 15:24:57,676 DEBUG [main] impl.BackupSystemTable(157): read backup status from hbase:backup for: backup_1471559069358
2016-08-18 15:24:57,681 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:24:57,681 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63439; # active connections: 4
2016-08-18 15:24:57,682 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:24:57,685 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63439 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:24:57,687 DEBUG [main] backup.TestIncrementalBackup(64): writing 99 rows to ns1:test-1471559060953
2016-08-18 15:24:57,692 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:24:57,692 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63440; # active connections: 5
2016-08-18 15:24:57,693 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:24:57,693 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63440 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:24:57,694 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,701 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,704 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,706 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,708 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,710 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,712 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,714 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,870 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,872 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,874 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,875 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,877 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,879 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,880 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:57,908 DEBUG [main] backup.TestIncrementalBackup(75): written 99 rows to ns1:test-1471559060953
2016-08-18 15:24:57,912 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013
2016-08-18 15:24:57,918 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013
2016-08-18 15:24:57,923 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013
2016-08-18 15:24:57,925 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013
2016-08-18 15:24:57,928 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013
2016-08-18 15:24:57,943 DEBUG [main] backup.TestIncrementalBackup(87): written 5 rows to ns2:test-14715590609531
2016-08-18 15:24:57,945 INFO [main] util.BackupClientUtil(105): Using existing backup root dir: hdfs://localhost:63272/backupUT
2016-08-18 15:24:57,949 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] impl.BackupSystemTable(431): get incr backup table set from hbase:backup
2016-08-18 15:24:57,950 INFO [B.defaultRpcServer.handler=1,queue=0,port=63280] master.HMaster(2641): Incremental backup for the following table set: [ns2:test-14715590609531, ns3:test-14715590609532, ns4:test-14715590609533, ns1:test-1471559060953]
2016-08-18 15:24:57,956 INFO [B.defaultRpcServer.handler=1,queue=0,port=63280] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x72a48fe4 connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:24:57,960 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x72a48fe40x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:24:57,961 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5859565a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:24:57,961 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:24:57,962 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:24:57,962 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x72a48fe4-0x1569fc0e731000f connected
2016-08-18 15:24:57,962 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] backup.BackupInfo(125): CreateBackupContext: 4 ns2:test-14715590609531
2016-08-18 15:24:58,070 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure IncrementalTableBackupProcedure (targetRootDir=hdfs://localhost:63272/backupUT; backupId=backup_1471559097949; tables=ns2:test-14715590609531,ns3:test-14715590609532,ns4:test-14715590609533,ns1:test-1471559060953) id=14 state=RUNNABLE:PREPARE_INCREMENTAL added to the store.
2016-08-18 15:24:58,073 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 15:24:58,073 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:backup/write-master:632800000000002
2016-08-18 15:24:58,074 INFO [ProcedureExecutor-5] master.FullTableBackupProcedure(130): Backup backup_1471559097949 started at 1471559098074.
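The TestIncrementalBackup entries above (test lines 64-87) load 99 rows into ns1:test-1471559060953 and 5 rows into ns2:test-14715590609531 before the incremental backup is requested. A minimal sketch of that kind of load step with the standard HBase client API; the column family and qualifier names are assumptions, since the test's table schema is not visible in this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class LoadRows {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("ns1:test-1471559060953"))) {
          for (int i = 0; i < 99; i++) {
            Put put = new Put(Bytes.toBytes("row-" + i));
            // "f"/"q" are hypothetical family/qualifier names, not taken from the log.
            put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes(i));
            table.put(put); // each Put is appended to the region's WAL before it is acked
          }
        }
      }
    }

These puts are what drive the interleaved wal.FSHLog "syncing writer" entries: a handler does not acknowledge a write until a SyncRunner thread reports the WAL append durable.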
2016-08-18 15:24:58,074 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471559097949 set status=RUNNING
2016-08-18 15:24:58,078 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:24:58,078 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63444; # active connections: 8
2016-08-18 15:24:58,078 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:24:58,079 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63444 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:24:58,086 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:24:58,086 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63445; # active connections: 6
2016-08-18 15:24:58,087 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:24:58,087 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63445 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:24:58,087 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559070025
2016-08-18 15:24:58,088 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(134): Backup session backup_1471559097949 has been started.
2016-08-18 15:24:58,088 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(431): get incr backup table set from hbase:backup
2016-08-18 15:24:58,090 DEBUG [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(216): For incremental backup, current table set is [ns2:test-14715590609531, ns3:test-14715590609532, ns4:test-14715590609533, ns1:test-1471559060953]
2016-08-18 15:24:58,092 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(180): read backup start code from hbase:backup
2016-08-18 15:24:58,093 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:63272/backupUT
2016-08-18 15:24:58,096 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(93): StartCode 1471559040101 for backupID backup_1471559097949
2016-08-18 15:24:58,096 INFO [ProcedureExecutor-5] impl.IncrementalBackupManager(104): Execute roll log procedure for incremental backup ...
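IncrementalBackupManager reads the previous start code and the per-region-server log-roll timestamps out of the hbase:backup system table (BackupSystemTable lines 180 and 365 above). The row and column layout of that table is internal to the backup feature and not shown in the log, so every name below is an assumption; this is only a sketch of the kind of point read being performed:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ReadStartCode {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.valueOf("hbase:backup"))) {
          // Hypothetical row key and column names; the real schema is private to the feature.
          Get get = new Get(Bytes.toBytes("startcode:hdfs://localhost:63272/backupUT"));
          Result result = meta.get(get);
          byte[] value = result.getValue(Bytes.toBytes("meta"), Bytes.toBytes("startcode"));
          if (value != null) {
            // The log above shows this resolving to 1471559040101 for backup_1471559097949.
            System.out.println("start code = " + Bytes.toString(value));
          }
        }
      }
    }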
2016-08-18 15:24:58,102 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:24:58,102 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63446; # active connections: 9
2016-08-18 15:24:58,102 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:24:58,103 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63446 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:24:58,105 INFO [B.defaultRpcServer.handler=4,queue=0,port=63280] master.MasterRpcServices(652): Client=tyu//10.22.9.171 procedure request for: rolllog-proc
2016-08-18 15:24:58,105 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] procedure.ProcedureCoordinator(177): Submitting procedure rolllog
2016-08-18 15:24:58,105 INFO [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.Procedure(196): Starting procedure 'rolllog'
2016-08-18 15:24:58,105 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-18 15:24:58,105 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.Procedure(204): Procedure 'rolllog' starting 'acquire'
2016-08-18 15:24:58,105 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.Procedure(247): Starting procedure 'rolllog', kicking off acquire phase on members.
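The rolllog-proc request logged by MasterRpcServices(652) is an externally triggered master procedure. A client-side sketch of issuing such a request through the public Admin API; the signature and instance names ("rolllog-proc", "rolllog") are read off the log, while the property key is an assumption:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TriggerLogRoll {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          Map<String, String> props = new HashMap<>();
          // Hypothetical property; how the backup root reaches the procedure is not shown here.
          props.put("backupRoot", "hdfs://localhost:63272/backupUT");
          // Blocks until the distributed procedure completes on all members.
          admin.execProcedure("rolllog-proc", "rolllog", props);
        }
      }
    }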
2016-08-18 15:24:58,106 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] zookeeper.ZKUtil(367): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-18 15:24:58,106 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.ZKProcedureCoordinatorRpcs(94): Creating acquire znode:/1/rolllog-proc/acquired/rolllog
2016-08-18 15:24:58,107 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-18 15:24:58,107 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-18 15:24:58,107 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-18 15:24:58,107 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 15:24:58,107 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-18 15:24:58,107 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 15:24:58,107 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,107 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog
2016-08-18 15:24:58,107 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(188): Found procedure znode: /1/rolllog-proc/acquired/rolllog
2016-08-18 15:24:58,108 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] zookeeper.ZKUtil(367): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,108 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.ZKProcedureCoordinatorRpcs(102): Watching for acquire node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,108 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-18 15:24:58,108 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/abort/rolllog
2016-08-18 15:24:58,108 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] zookeeper.ZKUtil(367): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/acquired/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,108 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.Procedure(208): Waiting for all members to 'acquire'
2016-08-18 15:24:58,108 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35
2016-08-18 15:24:58,108 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog
2016-08-18 15:24:58,108 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup.
2016-08-18 15:24:58,108 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(214): start proc data length is 35
2016-08-18 15:24:58,108 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(216): Found data for znode:/1/rolllog-proc/acquired/rolllog
2016-08-18 15:24:58,109 INFO [main-EventThread] regionserver.LogRollRegionServerProcedureManager(117): Attempting to run a roll log procedure for backup.
2016-08-18 15:24:58,109 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(55): Constructing a LogRollBackupSubprocedure.
2016-08-18 15:24:58,109 INFO [main-EventThread] regionserver.LogRollBackupSubprocedure(55): Constructing a LogRollBackupSubprocedure.
2016-08-18 15:24:58,109 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog
2016-08-18 15:24:58,109 DEBUG [main-EventThread] procedure.ProcedureMember(149): Submitting new Subprocedure:rolllog
2016-08-18 15:24:58,109 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms
2016-08-18 15:24:58,109 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-18 15:24:58,109 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.Subprocedure(157): Starting subprocedure 'rolllog' with timeout 60000ms
2016-08-18 15:24:58,110 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(108): Scheduling process timer to run in: 60000 ms
2016-08-18 15:24:58,110 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage
2016-08-18 15:24:58,110 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired
2016-08-18 15:24:58,110 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,63282,1471559038490' joining acquired barrier for procedure (rolllog) in zk
2016-08-18 15:24:58,110 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.Subprocedure(165): Subprocedure 'rolllog' starting 'acquire' stage
2016-08-18 15:24:58,110 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.Subprocedure(167): Subprocedure 'rolllog' locally acquired
2016-08-18 15:24:58,110 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(245): Member: '10.22.9.171,63280,1471559038246' joining acquired barrier for procedure (rolllog) in zk
2016-08-18 15:24:58,111 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,63282,1471559038490
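Both members join the acquire barrier by creating a child znode under acquired/rolllog and then watching for the coordinator's reached/rolllog node. A minimal member-side sketch with the plain ZooKeeper client (HBase's ZKProcedureMemberRpcs wraps this same pattern internally); the paths and member name are taken from the log, the session timeout is an assumption:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierMember {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:61765", 30_000, event -> { });
        String member = "10.22.9.171,63282,1471559038490";
        // "Locally acquired": announce it by creating our node under acquired/rolllog.
        zk.create("/1/rolllog-proc/acquired/rolllog/" + member, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // Wait for the coordinator's global "reached" node before doing the real work.
        zk.exists("/1/rolllog-proc/reached/rolllog", event -> {
          if (event.getType() == Watcher.Event.EventType.NodeCreated) {
            // execute the subprocedure (here: the WAL roll), then create
            // /1/rolllog-proc/reached/rolllog/<member> to report completion
          }
        });
      }
    }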
2016-08-18 15:24:58,111 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,111 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,111 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,111 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(253): Watch for global barrier reached:/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,111 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,112 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 15:24:58,112 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 15:24:58,112 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] zookeeper.ZKUtil(367): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,112 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-18 15:24:58,112 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] zookeeper.ZKUtil(367): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,112 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.Subprocedure(172): Subprocedure 'rolllog' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2016-08-18 15:24:58,112 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 15:24:58,112 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:58,113 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,113 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,113 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 15:24:58,114 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 15:24:58,114 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,63282,1471559038490' joining acquired barrier for procedure 'rolllog' on coordinator
2016-08-18 15:24:58,114 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@52a23a1c[Count = 1] remaining members to acquire global barrier
2016-08-18 15:24:58,114 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,114 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/acquired/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,114 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/acquired/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,114 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/acquired/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,114 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 15:24:58,114 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 15:24:58,115 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 15:24:58,115 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:58,115 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,115 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,116 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 15:24:58,116 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 15:24:58,116 DEBUG [main-EventThread] procedure.Procedure(298): member: '10.22.9.171,63280,1471559038246' joining acquired barrier for procedure 'rolllog' on coordinator
2016-08-18 15:24:58,116 DEBUG [main-EventThread] procedure.Procedure(307): Waiting on: java.util.concurrent.CountDownLatch@52a23a1c[Count = 0] remaining members to acquire global barrier
2016-08-18 15:24:58,116 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.Procedure(212): Procedure 'rolllog' starting 'in-barrier' execution.
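On the coordinator, Procedure(307) is literally waiting on a java.util.concurrent.CountDownLatch sized to the member count; each acquired/rolllog/<member> creation counts it down (Count = 1, then Count = 0 above), and at zero the in-barrier phase starts. The shape of that wait, as a sketch:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    class AcquireBarrier {
      // Two members in this log: the master and the single region server.
      private final CountDownLatch acquired = new CountDownLatch(2);

      // Called from the ZK watcher thread for each acquired/rolllog/<member> node.
      void memberAcquired(String member) {
        acquired.countDown();
      }

      // Called on the coordinator thread; 60000 ms matches the timer in the log.
      void awaitAllAcquired() throws InterruptedException {
        if (!acquired.await(60_000, TimeUnit.MILLISECONDS)) {
          throw new IllegalStateException("timed out waiting for members to acquire");
        }
        // Count reached 0: create reached/rolllog and start the in-barrier phase.
      }
    }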
2016-08-18 15:24:58,116 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.ZKProcedureCoordinatorRpcs(118): Creating reached barrier zk node:/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,117 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,117 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,117 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,117 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,117 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] zookeeper.ZKUtil(367): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,117 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,117 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(134): Received reached global barrier:/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,117 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator.
2016-08-18 15:24:58,117 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.Subprocedure(186): Subprocedure 'rolllog' received 'reached' from coordinator.
2016-08-18 15:24:58,117 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] zookeeper.ZKUtil(367): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/rolllog-proc/reached/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,117 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish.
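With 'reached' received, each member's RSRollLogTask (below) closes its current WALs and opens new ones, so that every edit made after this point lands in a fresh log file. Outside the backup internals, the same per-server roll can be requested through the public Admin API; a sketch, not the backup code path itself:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RollAllWals {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          for (ServerName sn : admin.getClusterStatus().getServers()) {
            // Ask the server to close its current WAL and start a new one;
            // fully-flushed old WALs then move to the oldWALs archive.
            admin.rollWALWriter(sn);
          }
        }
      }
    }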
2016-08-18 15:24:58,117 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,117 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 15:24:58,118 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 15:24:58,117 DEBUG [rs(10.22.9.171,63280,1471559038246)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(74): ++ DRPC started: 10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,117 DEBUG [rs(10.22.9.171,63282,1471559038490)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(74): ++ DRPC started: 10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,117 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.Procedure(216): Waiting for all members to 'release'
2016-08-18 15:24:58,117 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] regionserver.LogRollBackupSubprocedurePool(84): Waiting for backup procedure to finish.
2016-08-18 15:24:58,118 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 15:24:58,118 INFO [rs(10.22.9.171,63282,1471559038490)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): Trying to roll log in backup subprocedure, current log number: 1471559069994 on 10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,118 INFO [rs(10.22.9.171,63280,1471559038246)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(79): Trying to roll log in backup subprocedure, current log number: 1471559069590 on 10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,118 DEBUG [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(135): WAL roll requested
2016-08-18 15:24:58,118 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:58,118 DEBUG [master//10.22.9.171:0.logRoller] regionserver.LogRoller(135): WAL roll requested
2016-08-18 15:24:58,119 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,120 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63282,1471559038490
2016-08-18 15:24:58,120 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 15:24:58,121 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 15:24:58,122 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:58,122 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(238): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:58,122 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559098118
2016-08-18 15:24:58,123 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:24:58,128 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577
2016-08-18 15:24:58,128 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:58,130 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577
2016-08-18 15:24:58,130 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:58,135 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741852_1028{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 91
2016-08-18 15:24:58,136 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741851_1027{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 11592
2016-08-18 15:24:58,178 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 15:24:58,382 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 15:24:58,543 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577 with entries=101, filesize=11.32 KB; new WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:24:58,543 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559098118
2016-08-18 15:24:58,543 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.SequenceIdAccounting(335): not archiving hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577 value: 204 oldestFlushing: 9223372036854775807 oldestUnflushed: 106
2016-08-18 15:24:58,544 DEBUG [master//10.22.9.171:0.logRoller] wal.SequenceIdAccounting(335): not archiving hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116 value: 9 oldestFlushing: 9223372036854775807 oldestUnflushed: 4
2016-08-18 15:24:58,545 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398 to hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398
2016-08-18 15:24:58,545 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577 to hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577
2016-08-18 15:24:58,550 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547
2016-08-18 15:24:58,550 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559098548
2016-08-18 15:24:58,555 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994
2016-08-18 15:24:58,556 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590
2016-08-18 15:24:58,557 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994
2016-08-18 15:24:58,557 DEBUG [master//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590
2016-08-18 15:24:58,562 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741853_1029{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 91
2016-08-18 15:24:58,563 INFO [IPC Server handler 1 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741854_1030{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 91
2016-08-18 15:24:58,685 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 15:24:58,858 DEBUG [10.22.9.171,63282,1471559038490_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 15:24:58,966 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547
2016-08-18 15:24:58,967 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994 to hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994
2016-08-18 15:24:58,968 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590 with entries=0, filesize=91 B; new WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559098548
2016-08-18 15:24:58,969 INFO [master//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590 to hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590
2016-08-18 15:24:58,971 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:24:58,975 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013
2016-08-18 15:24:58,976 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013
2016-08-18 15:24:58,979 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 83
2016-08-18 15:24:58,980 DEBUG [rs(10.22.9.171,63280,1471559038246)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(86): log roll took 862
2016-08-18 15:24:58,980 INFO [rs(10.22.9.171,63280,1471559038246)-backup-pool30-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(87): After roll log in backup subprocedure, current log number: 1471559098548 on 10.22.9.171,63280,1471559038246
2016-08-18 15:24:58,980 DEBUG [rs(10.22.9.171,63280,1471559038246)-backup-pool30-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup
2016-08-18 15:24:58,981 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013 with entries=7, filesize=1.17 KB; new WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:24:58,981 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.SequenceIdAccounting(335): not archiving hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013 value: 110 oldestFlushing: 9223372036854775807 oldestUnflushed: 106
2016-08-18 15:24:58,982 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(953): Archiving hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843 to hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843
2016-08-18 15:24:58,984 DEBUG [rs(10.22.9.171,63280,1471559038246)-backup-pool30-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup
2016-08-18 15:24:58,985 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559070025
2016-08-18 15:24:59,013 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(665): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:24:59,014 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed
2016-08-18 15:24:59,014 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.9.171,63280,1471559038246' in zk
2016-08-18 15:24:59,015 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,015 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion
2016-08-18 15:24:59,015 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,015 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-18 15:24:59,015 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,015 DEBUG [member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed.
2016-08-18 15:24:59,015 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,017 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 15:24:59,017 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 15:24:59,017 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 15:24:59,018 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:59,019 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,019 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,020 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 15:24:59,020 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 15:24:59,020 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559070025
2016-08-18 15:24:59,020 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:59,021 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,022 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(862): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559070025
2016-08-18 15:24:59,024 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.9.171,63280,1471559038246':
2016-08-18 15:24:59,024 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,63280,1471559038246' released barrier for procedure 'rolllog', counting down latch. Waiting for 1 more
2016-08-18 15:24:59,027 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 83
2016-08-18 15:24:59,029 INFO [regionserver//10.22.9.171:0.logRoller] wal.FSHLog(886): Rolled WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559070025 with entries=8, filesize=4.28 KB; new WAL /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:24:59,029 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.SequenceIdAccounting(335): not archiving hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158 value: 6 oldestFlushing: 9223372036854775807 oldestUnflushed: 4
2016-08-18 15:24:59,029 DEBUG [regionserver//10.22.9.171:0.logRoller] wal.SequenceIdAccounting(335): not archiving hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559070025 value: 14 oldestFlushing: 9223372036854775807 oldestUnflushed: 4
2016-08-18 15:24:59,037 DEBUG [rs(10.22.9.171,63282,1471559038490)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(86): log roll took 919
2016-08-18 15:24:59,037 INFO [rs(10.22.9.171,63282,1471559038490)-backup-pool29-thread-1] regionserver.LogRollBackupSubprocedure$RSRollLogTask(87): After roll log in backup subprocedure, current log number: 1471559098547 on 10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,037 DEBUG [rs(10.22.9.171,63282,1471559038490)-backup-pool29-thread-1] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup
2016-08-18 15:24:59,040 DEBUG [rs(10.22.9.171,63282,1471559038490)-backup-pool29-thread-1] impl.BackupSystemTable(254): write region server last roll log result to hbase:backup
2016-08-18 15:24:59,042 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:24:59,043 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.Subprocedure(188): Subprocedure 'rolllog' locally completed
2016-08-18 15:24:59,043 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.ZKProcedureMemberRpcs(269): Marking procedure 'rolllog' completed for member '10.22.9.171,63282,1471559038490' in zk
2016-08-18 15:24:59,045 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,045 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.Subprocedure(193): Subprocedure 'rolllog' has notified controller of completion
2016-08-18 15:24:59,045 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-18 15:24:59,045 DEBUG [member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1] procedure.Subprocedure(218): Subprocedure 'rolllog' completed.
2016-08-18 15:24:59,045 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/reached/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,045 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs$1(100): Ignoring created notification for node:/1/rolllog-proc/reached/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,045 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/reached/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,046 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 15:24:59,046 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 15:24:59,046 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 15:24:59,046 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:59,047 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,047 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,047 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 15:24:59,048 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 15:24:59,048 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:59,048 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,049 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,049 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(221): Finished data from procedure 'rolllog' member '10.22.9.171,63282,1471559038490':
2016-08-18 15:24:59,049 DEBUG [main-EventThread] procedure.Procedure(329): Member: '10.22.9.171,63282,1471559038490' released barrier for procedure 'rolllog', counting down latch. Waiting for 0 more
2016-08-18 15:24:59,049 INFO [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.Procedure(221): Procedure 'rolllog' execution completed
2016-08-18 15:24:59,049 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.Procedure(230): Running finish phase.
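The finish phase that follows tears the barrier down by deleting the acquired, reached, and abort subtrees for rolllog. ZooKeeper only deletes empty nodes, so a cleanup like this has to recurse children-first; a minimal sketch with the plain ZooKeeper client (HBase's ZKProcedureUtil does the equivalent internally):

    import org.apache.zookeeper.ZooKeeper;

    public class BarrierCleanup {
      // Delete a znode and everything under it, children first.
      static void deleteRecursively(ZooKeeper zk, String path) throws Exception {
        for (String child : zk.getChildren(path, false)) {
          deleteRecursively(zk, path + "/" + child);
        }
        zk.delete(path, -1); // version -1 matches any version
      }

      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:61765", 30_000, event -> { });
        for (String node : new String[] {
            "/1/rolllog-proc/acquired/rolllog",
            "/1/rolllog-proc/reached/rolllog",
            "/1/rolllog-proc/abort/rolllog"}) {
          deleteRecursively(zk, node);
        }
      }
    }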
2016-08-18 15:24:59,049 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.Procedure(281): Finished coordinator procedure - removing self from list of running procedures
2016-08-18 15:24:59,049 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.ZKProcedureCoordinatorRpcs(165): Attempting to clean out zk node for op:rolllog
2016-08-18 15:24:59,049 INFO [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] procedure.ZKProcedureUtil(285): Clearing all znodes for procedure rolllog including nodes /1/rolllog-proc/acquired /1/rolllog-proc/reached /1/rolllog-proc/abort
2016-08-18 15:24:59,050 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2016-08-18 15:24:59,050 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2016-08-18 15:24:59,050 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog
2016-08-18 15:24:59,050 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(80): Received created event:/1/rolllog-proc/abort/rolllog
2016-08-18 15:24:59,050 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2016-08-18 15:24:59,051 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2016-08-18 15:24:59,051 DEBUG [main-EventThread] procedure.ZKProcedureCoordinatorRpcs$1(196): Node created: /1/rolllog-proc/abort/rolllog
2016-08-18 15:24:59,051 DEBUG [main-EventThread] procedure.ZKProcedureUtil(244): Current zk system:
2016-08-18 15:24:59,051 DEBUG [main-EventThread] procedure.ZKProcedureUtil(246): |-/1/rolllog-proc
2016-08-18 15:24:59,051 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] zookeeper.ZKUtil(365): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,051 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort
2016-08-18 15:24:59,051 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort
2016-08-18 15:24:59,051 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-18 15:24:59,051 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-acquired
2016-08-18 15:24:59,051 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] zookeeper.ZKUtil(365): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/acquired/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,051 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(317): Aborting procedure member for znode /1/rolllog-proc/abort/rolllog
2016-08-18 15:24:59,052 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:59,052 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,052 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,052 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-abort
2016-08-18 15:24:59,053 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] zookeeper.ZKUtil(365): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,053 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:59,053 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] zookeeper.ZKUtil(365): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on existing znode=/1/rolllog-proc/reached/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,053 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-reached
2016-08-18 15:24:59,053 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |----rolllog
2016-08-18 15:24:59,053 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,054 DEBUG [main-EventThread] procedure.ZKProcedureUtil(263): |-------10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,054 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-18 15:24:59,055 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-18 15:24:59,055 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 15:24:59,055 DEBUG [(10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1] errorhandling.TimeoutExceptionInjector(88): Marking timer as complete - no error notifications will be received for this timer.
2016-08-18 15:24:59,055 DEBUG [main-EventThread] zookeeper.ZKUtil(624): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Unable to get data of znode /1/rolllog-proc/abort/rolllog because node does not exist (not an error)
2016-08-18 15:24:59,055 INFO [B.defaultRpcServer.handler=4,queue=0,port=63280] master.LogRollMasterProcedureManager(116): Done waiting - exec procedure for rolllog
2016-08-18 15:24:59,055 INFO [B.defaultRpcServer.handler=4,queue=0,port=63280] master.LogRollMasterProcedureManager(117): Distributed roll log procedure is successful!
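
The znode dumps above show the layout that drives the distributed procedure: members register under acquired/, report completion under reached/, and a child under abort/ signals failure; the coordinator's latch reaches zero once every member has a node under reached/rolllog. A rough sketch of that barrier against the plain ZooKeeper client follows; the paths mirror the log, while the class shape and error handling are assumed for illustration.

    // Rough sketch of the acquire/reached barrier visible in the znode dumps,
    // written against the plain ZooKeeper client.
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs.Ids;
    import org.apache.zookeeper.ZooKeeper;

    final class RollLogBarrier {
      private static final String BASE = "/1/rolllog-proc";  // baseZNode=/1 in the log
      private final ZooKeeper zk;

      RollLogBarrier(ZooKeeper zk) { this.zk = zk; }

      // Member joins the procedure: creates acquired/rolllog/<member>.
      void acquire(String member) throws Exception {
        zk.create(BASE + "/acquired/rolllog/" + member, new byte[0],
            Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
      }

      // Member finished its local work: creates reached/rolllog/<member>,
      // which fires the NodeCreated events seen above.
      void markReached(String member) throws Exception {
        zk.create(BASE + "/reached/rolllog/" + member, new byte[0],
            Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
      }

      // Coordinator side of "counting down latch. Waiting for N more".
      boolean allReached(int expectedMembers) throws Exception {
        return zk.getChildren(BASE + "/reached/rolllog", false).size() >= expectedMembers;
      }

      // A child under abort/ signals failure to every watcher.
      void abort() throws Exception {
        zk.create(BASE + "/abort/rolllog", new byte[0],
            Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
      }
    }
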
2016-08-18 15:24:59,055 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort
2016-08-18 15:24:59,055 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/abort
2016-08-18 15:24:59,056 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort
2016-08-18 15:24:59,056 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-18 15:24:59,056 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(110): Received procedure abort children changed event: /1/rolllog-proc/abort
2016-08-18 15:24:59,056 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(140): Checking for aborted procedures on node: '/1/rolllog-proc/abort'
2016-08-18 15:24:59,056 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,056 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog
2016-08-18 15:24:59,056 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,056 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/acquired/rolllog
2016-08-18 15:24:59,056 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/acquired
2016-08-18 15:24:59,056 INFO [main-EventThread] procedure.ZKProcedureMemberRpcs$1(107): Received procedure start children changed event: /1/rolllog-proc/acquired
2016-08-18 15:24:59,056 DEBUG [main-EventThread] procedure.ZKProcedureMemberRpcs(156): Looking for new procedures under znode:'/1/rolllog-proc/acquired'
2016-08-18 15:24:59,056 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,63282,1471559038490
2016-08-18 15:24:59,057 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:59,057 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog/10.22.9.171,63280,1471559038246
2016-08-18 15:24:59,057 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/reached/rolllog
2016-08-18 15:24:59,057 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rolllog-proc/abort/rolllog
2016-08-18 15:24:59,057 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2481): Waiting a max of 300000 ms for procedure 'rolllog-proc : rolllog' to complete. (max 857 ms per retry)
2016-08-18 15:24:59,057 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2490): (#1) Sleeping: 100ms while waiting for procedure completion.
2016-08-18 15:24:59,161 DEBUG [ProcedureExecutor-5] client.HBaseAdmin(2496): Getting current status of procedure from master...
2016-08-18 15:24:59,167 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(904): Checking to see if procedure from request:rolllog-proc is done
2016-08-18 15:24:59,169 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(222): read region server last roll log result to hbase:backup
2016-08-18 15:24:59,172 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(215): In getLogFilesForNewBackup() olderTimestamps: {10.22.9.171:63282=1471559040101, 10.22.9.171:63280=1471559040101} newestTimestamps: {10.22.9.171:63282=1471559069994, 10.22.9.171:63280=1471559069590}
2016-08-18 15:24:59,175 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559098548
2016-08-18 15:24:59,175 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(274): excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559098548 1471559098548 <= 1471559069590
2016-08-18 15:24:59,175 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116
2016-08-18 15:24:59,175 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(278): not excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116 1471559040116 <= 1471559069590
2016-08-18 15:24:59,175 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559098118
2016-08-18 15:24:59,176 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(274): excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559098118 1471559098118 <= 1471559069590
2016-08-18 15:24:59,176 WARN [ProcedureExecutor-5] wal.DefaultWALProvider(349): Cannot parse a server name from path=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta; Not a host:port pair: 10.22.9.171,63280,1471559038246.meta
2016-08-18 15:24:59,176 WARN [ProcedureExecutor-5] util.BackupServerUtil(237): Skip log file (can't parse): hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta
2016-08-18 15:24:59,177 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547
2016-08-18 15:24:59,177 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(274): excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547 1471559098547 <= 1471559069994
2016-08-18 15:24:59,177 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158
2016-08-18 15:24:59,177 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(278): not excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158 1471559042158 <= 1471559069994
2016-08-18 15:24:59,177 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559070025
2016-08-18 15:24:59,177 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(274): excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559070025 1471559070025 <= 1471559069994
2016-08-18 15:24:59,178 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:24:59,178 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(274): excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984 1471559098984 <= 1471559069994
2016-08-18 15:24:59,178 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:59,178 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(278): not excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577 1471559069577 <= 1471559069994
2016-08-18 15:24:59,178 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:24:59,178 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(274): excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118 1471559098118 <= 1471559069994
2016-08-18 15:24:59,178 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013
2016-08-18 15:24:59,178 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(274): excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559070013 1471559070013 <= 1471559069994
2016-08-18 15:24:59,178 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(261): currentLogFile: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:24:59,178 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(274): excluding hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969 1471559098969 <= 1471559069994
2016-08-18 15:24:59,179 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(318): excluding old wal hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559040101 1471559040101 <= 1471559040101
2016-08-18 15:24:59,179 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(318): excluding old wal hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559040101 1471559040101 <= 1471559040101
2016-08-18 15:24:59,180 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(500): get WAL files from hbase:backup
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:63272/backupUT/backup_1471559069358/hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559040101
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] impl.IncrementalBackupManager(191): skipping wal /hdfs://localhost:63272/backupUT/backup_1471559069358/hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559040101
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] backup.BackupInfo(313): setting incr backup file list
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398
2016-08-18 15:24:59,185 DEBUG [ProcedureExecutor-5] backup.BackupInfo(315): hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843
2016-08-18 15:24:59,187 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 15:24:59,294 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x77c2685b connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:24:59,298 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x77c2685b0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:24:59,299 DEBUG [ProcedureExecutor-5] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26570180, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:24:59,299 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:24:59,299 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:24:59,299 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x77c2685b-0x1569fc0e7310010 connected
2016-08-18 15:24:59,302 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:24:59,302 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63454; # active connections: 10
2016-08-18 15:24:59,303 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:24:59,303 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63454 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:24:59,304 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns3:test-14715590609532
2016-08-18 15:24:59,316 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741891_1067{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:24:59,318 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/backupUT/backup_1471559097949/ns3/test-14715590609532/.tabledesc/.tableinfo.0000000001
2016-08-18 15:24:59,319 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo.
2016-08-18 15:24:59,320 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x77c2685b connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:24:59,323 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x77c2685b0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:24:59,325 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x77c2685b-0x1569fc0e7310011 connected
2016-08-18 15:24:59,327 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns3:test-14715590609532
2016-08-18 15:24:59,334 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741892_1068{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:24:59,334 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns3:test-14715590609532
2016-08-18 15:24:59,335 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns4:test-14715590609533
2016-08-18 15:24:59,345 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741893_1069{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:24:59,347 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/backupUT/backup_1471559097949/ns4/test-14715590609533/.tabledesc/.tableinfo.0000000001
2016-08-18 15:24:59,348 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo.
2016-08-18 15:24:59,348 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x77c2685b connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:24:59,350 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x77c2685b0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:24:59,353 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x77c2685b-0x1569fc0e7310012 connected
2016-08-18 15:24:59,354 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns4:test-14715590609533
2016-08-18 15:24:59,361 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741894_1070{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:24:59,361 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns4:test-14715590609533
2016-08-18 15:24:59,362 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns2:test-14715590609531
2016-08-18 15:24:59,374 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741895_1071{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:24:59,376 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/backupUT/backup_1471559097949/ns2/test-14715590609531/.tabledesc/.tableinfo.0000000001
2016-08-18 15:24:59,377 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo.
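
The getLogFilesForNewBackup() entries earlier reduce to a per-server timestamp window: a WAL whose name ends in a timestamp is picked up when that timestamp is newer than the previous backup's recorded roll time (olderTimestamps) and no newer than the roll done for this backup (newestTimestamps); old WALs at or below the older bound are skipped as already covered. A small sketch of that cut, with illustrative class and method names:

    // Sketch of the timestamp window implied by the "excluding"/"not excluding"
    // decisions above. Class and method names are illustrative.
    import java.util.Map;

    final class WalSelector {
      // WAL file names end in ".<timestamp>", e.g. "...regiongroup-1.1471559040116".
      static long walTimestamp(String walName) {
        return Long.parseLong(walName.substring(walName.lastIndexOf('.') + 1));
      }

      // Keep a WAL iff olderTs < ts <= newestTs for its server:
      //   ts <= olderTs  -> covered by the previous backup ("excluding old wal")
      //   ts >  newestTs -> written after this backup's log roll ("excluding")
      static boolean inBackupWindow(String walName, String server,
          Map<String, Long> olderTimestamps, Map<String, Long> newestTimestamps) {
        long ts = walTimestamp(walName);
        return ts > olderTimestamps.get(server) && ts <= newestTimestamps.get(server);
      }
    }
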
2016-08-18 15:24:59,377 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x77c2685b connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:24:59,379 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x77c2685b0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:24:59,384 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x77c2685b-0x1569fc0e7310013 connected
2016-08-18 15:24:59,388 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns2:test-14715590609531
2016-08-18 15:24:59,394 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741896_1072{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 50
2016-08-18 15:24:59,595 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns3/test-14715590609532/e196ea4c6ebf18d7f346b1209ee442d8/f
2016-08-18 15:24:59,598 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns4/test-14715590609533/a4882d4755c241a0547202f501525250/f
2016-08-18 15:24:59,598 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/info
2016-08-18 15:24:59,599 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/table
2016-08-18 15:24:59,599 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/meta
2016-08-18 15:24:59,600 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/session
2016-08-18 15:24:59,602 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/namespace/1934919e607520cdbfbecc5343937a9f/info
2016-08-18 15:24:59,605 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 15:24:59,607 INFO [10.22.9.171,63280,1471559038246_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x13b5e94d connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:24:59,610 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x13b5e94d0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:24:59,611 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@181cc0cb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:24:59,611 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:24:59,611 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:24:59,611 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup
2016-08-18 15:24:59,611 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x13b5e94d-0x1569fc0e7310014 connected
2016-08-18 15:24:59,614 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:24:59,614 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63466; # active connections: 11
2016-08-18 15:24:59,615 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:24:59,615 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63466 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:24:59,619 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:24:59,619 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63467; # active connections: 7
2016-08-18 15:24:59,620 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:24:59,621 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63467 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:24:59,623 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559040101
2016-08-18 15:24:59,625 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559040101
2016-08-18 15:24:59,625 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590
2016-08-18 15:24:59,626 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590
2016-08-18 15:24:59,626 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577
2016-08-18 15:24:59,626 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577
2016-08-18 15:24:59,627 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559040101
2016-08-18 15:24:59,627 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559040101
2016-08-18 15:24:59,627 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994
2016-08-18 15:24:59,628 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994
2016-08-18 15:24:59,628 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398
2016-08-18 15:24:59,629 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398
2016-08-18 15:24:59,629 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843
2016-08-18 15:24:59,630 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(80): Didn't find this log in hbase:backup, keeping: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843
2016-08-18 15:24:59,630 INFO [10.22.9.171,63280,1471559038246_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310014
2016-08-18 15:24:59,631 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:24:59,632 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (1654176215) to /10.22.9.171:63282 from tyu: closed
2016-08-18 15:24:59,632 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63466 because read count=-1. Number of active connections: 11
2016-08-18 15:24:59,632 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (-2093602305) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:24:59,632 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63467 because read count=-1. Number of active connections: 7
2016-08-18 15:24:59,801 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns2:test-14715590609531
2016-08-18 15:24:59,803 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(175): Attempting to copy table info for:ns1:test-1471559060953
2016-08-18 15:24:59,815 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741897_1073{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:24:59,818 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/backupUT/backup_1471559097949/ns1/test-1471559060953/.tabledesc/.tableinfo.0000000001
2016-08-18 15:24:59,818 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(184): Finished copying tableinfo.
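
The BackupLogCleaner entries above apply a simple rule to oldWALs: a file already recorded in hbase:backup is safe to delete ("Found log file in hbase:backup, deleting"), and anything unknown is kept for a future incremental backup. A sketch of that filter follows; the BackupCatalog lookup is an assumed stand-in for the hbase:backup read, not the real cleaner API.

    // Sketch of the keep/delete rule applied above.
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.fs.FileStatus;

    final class BackupAwareLogCleaner {
      interface BackupCatalog {
        boolean isBackedUp(String walPath);  // assumption: query hbase:backup
      }

      static List<FileStatus> deletable(Iterable<FileStatus> oldWals, BackupCatalog catalog) {
        List<FileStatus> out = new ArrayList<>();
        for (FileStatus f : oldWals) {
          if (catalog.isBackedUp(f.getPath().toString())) {
            out.add(f);  // "Found log file in hbase:backup, deleting"
          }              // else: "Didn't find this log in hbase:backup, keeping"
        }
        return out;
      }
    }
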
2016-08-18 15:24:59,819 INFO [ProcedureExecutor-5] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x77c2685b connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:24:59,821 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(590): hbase-admin-on-hconnection-0x77c2685b0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:24:59,822 DEBUG [ProcedureExecutor-5-EventThread] zookeeper.ZooKeeperWatcher(674): hbase-admin-on-hconnection-0x77c2685b-0x1569fc0e7310015 connected
2016-08-18 15:24:59,824 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(188): Starting to write region info for table ns1:test-1471559060953
2016-08-18 15:24:59,831 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741898_1074{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:24:59,832 DEBUG [ProcedureExecutor-5] util.BackupServerUtil(197): Finished writing region info for table ns1:test-1471559060953
2016-08-18 15:24:59,832 INFO [ProcedureExecutor-5] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310010
2016-08-18 15:24:59,833 DEBUG [ProcedureExecutor-5] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:24:59,833 INFO [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(125): Incremental copy is starting.
2016-08-18 15:24:59,833 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (221148243) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:24:59,833 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63454 because read count=-1. Number of active connections: 10
2016-08-18 15:24:59,838 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(308): Doing COPY_TYPE_DISTCP
2016-08-18 15:24:59,864 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(318): DistCp options: [hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116, hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158, hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577, hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590, hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577, hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994, hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398, hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843, hdfs://localhost:63272/backupUT/backup_1471559097949/WALs]
2016-08-18 15:24:59,957 WARN [ProcedureExecutor-5] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
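
The "Doing COPY_TYPE_DISTCP" and "DistCp options" entries show the incremental copy is an ordinary DistCp run whose sources are the selected WAL paths and whose target is the WALs directory under the backup root (the last element of the logged options list). A minimal sketch of driving DistCp that way with the Hadoop 2.x API, under those assumptions:

    // Minimal sketch: copy the selected WALs into the backup's WALs directory.
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.tools.DistCp;
    import org.apache.hadoop.tools.DistCpOptions;

    final class WalCopy {
      static void copy(Configuration conf, List<Path> walFiles, Path backupWalsDir)
          throws Exception {
        DistCpOptions options = new DistCpOptions(walFiles, backupWalsDir);
        new DistCp(conf, options).execute();  // runs the MR job to completion
      }
    }
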
2016-08-18 15:25:00,103 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741899_1075{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:00,128 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741900_1076{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 1629
2016-08-18 15:25:00,192 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 15:25:00,553 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741901_1077{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:00,571 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741902_1078{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:00,588 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741903_1079{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:00,612 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741904_1080{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:00,629 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741905_1081{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:00,645 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741906_1082{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:01,081 INFO [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(247): Progress: 100.0% subTask: 1.0 mapProgress: 1.0
2016-08-18 15:25:01,081 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471559097949 set status=RUNNING
2016-08-18 15:25:01,083 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:25:01,085 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(140): Backup progress data "100%" has been updated to hbase:backup for backup_1471559097949
2016-08-18 15:25:01,085 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(256): Backup progress data updated to hbase:backup: "Progress: 100.0% - 36491 bytes copied."
2016-08-18 15:25:01,085 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(271): DistCp job-id: job_local1341040037_0005 completed: true true
2016-08-18 15:25:01,091 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService$BackupDistCp(274): Counters: 23
        File System Counters
                FILE: Number of bytes read=94097970
                FILE: Number of bytes written=94410181
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=93033
                HDFS: Number of bytes written=2223940
                HDFS: Number of read operations=632
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=289
        Map-Reduce Framework
                Map input records=8
                Map output records=0
                Input split bytes=262
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=1231552512
        File Input Format Counters
                Bytes Read=2639
        File Output Format Counters
                Bytes Written=0
        org.apache.hadoop.tools.mapred.CopyMapper$Counter
                BYTESCOPIED=36491
                BYTESEXPECTED=36491
                COPY=8
2016-08-18 15:25:01,092 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(326): list of hdfs://localhost:63272/backupUT/backup_1471559097949/WALs for distcp 0
2016-08-18 15:25:01,095 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100571; access_time=1471559100563; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:01,095 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116; isDirectory=false; length=981; replication=1; blocksize=134217728; modification_time=1471559100104; access_time=1471559100092; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:01,095 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100589; access_time=1471559100580; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:01,095 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100612; access_time=1471559100599; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:01,095 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158; isDirectory=false; length=1629; replication=1; blocksize=134217728; modification_time=1471559100532; access_time=1471559100120; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:01,095 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398; isDirectory=false; length=10957; replication=1; blocksize=134217728; modification_time=1471559100630; access_time=1471559100622; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:01,096 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577; isDirectory=false; length=11592; replication=1; blocksize=134217728; modification_time=1471559100553; access_time=1471559100545; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:01,096 DEBUG [ProcedureExecutor-5] mapreduce.MapReduceBackupCopyService(331): LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843; isDirectory=false; length=11059; replication=1; blocksize=134217728; modification_time=1471559100646; access_time=1471559100638; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:01,101 INFO [ProcedureExecutor-5] master.IncrementalTableBackupProcedure(176): Incremental copy from hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843 to hdfs://localhost:63272/backupUT/backup_1471559097949/WALs finished.
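
After the job completes, the target directory is listed back (the LocatedFileStatus dumps above) before the files are registered in hbase:backup. A sketch of that enumeration with the standard Hadoop FileSystem API:

    // Sketch: enumerate the backup's WALs directory, yielding entries like the
    // LocatedFileStatus lines printed above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    final class ListCopiedWals {
      static void list(Configuration conf, Path backupWalsDir) throws Exception {
        FileSystem fs = backupWalsDir.getFileSystem(conf);
        RemoteIterator<LocatedFileStatus> it = fs.listFiles(backupWalsDir, false);
        while (it.hasNext()) {
          System.out.println(it.next());  // path, length, replication, mtime, owner...
        }
      }
    }
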
2016-08-18 15:25:01,101 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(480): add WAL files to hbase:backup: backup_1471559097949 hdfs://localhost:63272/backupUT files [hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398,hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843]
2016-08-18 15:25:01,101 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116
2016-08-18 15:25:01,101 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158
2016-08-18 15:25:01,101 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577
2016-08-18 15:25:01,101 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590
2016-08-18 15:25:01,101 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577
2016-08-18 15:25:01,101 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994
2016-08-18 15:25:01,101 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398
2016-08-18 15:25:01,101 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(483): add :hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843
2016-08-18 15:25:01,103 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:25:01,214 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:63272/backupUT
2016-08-18 15:25:01,219 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(337): write RS log time stamps to hbase:backup for tables [ns3:test-14715590609532,ns4:test-14715590609533,ns2:test-14715590609531,ns1:test-1471559060953]
2016-08-18 15:25:01,221 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:25:01,222 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(365): read RS log ts from hbase:backup for root=hdfs://localhost:63272/backupUT
2016-08-18 15:25:01,226 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(205): write backup start code to hbase:backup 1471559069590
2016-08-18 15:25:01,227 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:25:01,228 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-18 15:25:01,228 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471559097949
2016-08-18 15:25:01,228 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 15:25:01,228 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 15:25:01,233 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-18 15:25:01,233 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:63272/backupUT backup_1471559097949 INCREMENTAL
2016-08-18 15:25:01,233 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471559097949
2016-08-18 15:25:01,233 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 15:25:01,233 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 15:25:01,237 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-18 15:25:01,243 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741907_1083{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 814
2016-08-18 15:25:01,648 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:63272/backupUT/backup_1471559097949/ns3/test-14715590609532/.backup.manifest
2016-08-18 15:25:01,648 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-18 15:25:01,649 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471559097949
2016-08-18 15:25:01,649 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 15:25:01,649 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 15:25:01,653 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-18 15:25:01,653 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:63272/backupUT backup_1471559097949 INCREMENTAL
2016-08-18 15:25:01,654 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471559097949
2016-08-18 15:25:01,654 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 15:25:01,654 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 15:25:01,657 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-18 15:25:01,664 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741908_1084{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 814
2016-08-18 15:25:02,068 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:63272/backupUT/backup_1471559097949/ns4/test-14715590609533/.backup.manifest
2016-08-18 15:25:02,068 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-18 15:25:02,068 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471559097949
2016-08-18 15:25:02,068 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 15:25:02,068 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 15:25:02,073 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-18 15:25:02,073 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:63272/backupUT backup_1471559097949 INCREMENTAL
2016-08-18 15:25:02,073 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471559097949
2016-08-18 15:25:02,073 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 15:25:02,073 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 15:25:02,077 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-18 15:25:02,083 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741909_1085{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:02,084 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:63272/backupUT/backup_1471559097949/ns2/test-14715590609531/.backup.manifest
2016-08-18 15:25:02,084 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 1 tables exist in table set.
2016-08-18 15:25:02,084 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471559097949
2016-08-18 15:25:02,084 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 15:25:02,084 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 15:25:02,088 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-18 15:25:02,088 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:63272/backupUT backup_1471559097949 INCREMENTAL
2016-08-18 15:25:02,088 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471559097949
2016-08-18 15:25:02,088 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 15:25:02,088 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 15:25:02,091 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-18 15:25:02,098 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741910_1086{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:02,099 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:63272/backupUT/backup_1471559097949/ns1/test-1471559060953/.backup.manifest
2016-08-18 15:25:02,099 DEBUG [ProcedureExecutor-5] impl.BackupManifest(455): 4 tables exist in table set.
2016-08-18 15:25:02,099 DEBUG [ProcedureExecutor-5] impl.BackupManager(302): Getting the direct ancestors of the current backup backup_1471559097949
2016-08-18 15:25:02,099 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(271): get backup history from hbase:backup
2016-08-18 15:25:02,099 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(304): get backup contexts from hbase:backup
2016-08-18 15:25:02,102 DEBUG [ProcedureExecutor-5] impl.BackupManager(359): Got 1 ancestors for the current backup.
2016-08-18 15:25:02,102 DEBUG [ProcedureExecutor-5] impl.BackupManifest(594): hdfs://localhost:63272/backupUT backup_1471559097949 INCREMENTAL
2016-08-18 15:25:02,109 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741911_1087{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:02,109 INFO [ProcedureExecutor-5] impl.BackupManifest(490): Manifest file stored to hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/.backup.manifest
2016-08-18 15:25:02,109 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(439): in-fly convert code here, provided by future jira
2016-08-18 15:25:02,109 DEBUG [ProcedureExecutor-5] master.FullTableBackupProcedure(447): Backup backup_1471559097949 finished: type=INCREMENTAL,tablelist=ns3:test-14715590609532;ns4:test-14715590609533;ns2:test-14715590609531;ns1:test-1471559060953,targetRootDir=hdfs://localhost:63272/backupUT,startts=1471559098074,completets=1471559101228,bytescopied=0
2016-08-18 15:25:02,110 DEBUG [ProcedureExecutor-5] impl.BackupSystemTable(122): update backup status in hbase:backup for: backup_1471559097949 set status=COMPLETE
2016-08-18 15:25:02,111 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:25:02,112 INFO [ProcedureExecutor-5] master.FullTableBackupProcedure(462): Backup backup_1471559097949 completed.
2016-08-18 15:25:02,199 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 15:25:02,218 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:backup/write-master:632800000000002
2016-08-18 15:25:02,219 DEBUG [ProcedureExecutor-5] procedure2.ProcedureExecutor(870): Procedure completed in 4.1510sec: IncrementalTableBackupProcedure (targetRootDir=hdfs://localhost:63272/backupUT; backupId=backup_1471559097949; tables=ns2:test-14715590609531,ns3:test-14715590609532,ns4:test-14715590609533,ns1:test-1471559060953) id=14 state=FINISHED
2016-08-18 15:25:02,385 DEBUG [10.22.9.171,63319,1471559042214_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 15:25:02,418 DEBUG [10.22.9.171,63314,1471559042157_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 15:25:02,639 DEBUG [10.22.9.171,63314,1471559042157_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/backup/850b903ca0af513aa15775825a9a082c/meta
2016-08-18 15:25:02,639 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/info
2016-08-18 15:25:02,641 DEBUG [10.22.9.171,63314,1471559042157_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/backup/850b903ca0af513aa15775825a9a082c/session
2016-08-18 15:25:02,641 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/table
2016-08-18 15:25:02,642 DEBUG [10.22.9.171,63314,1471559042157_ChoreService_1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/namespace/a3b1a9605e4887d65b7f50b16f400740/info
2016-08-18 15:25:06,203 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=14
2016-08-18 15:25:06,204 DEBUG [main] impl.BackupSystemTable(157): read backup status from hbase:backup for: backup_1471559097949
2016-08-18 15:25:06,211 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/.backup.manifest
2016-08-18 15:25:06,214 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358
2016-08-18 15:25:06,215 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns1/test-1471559060953/.backup.manifest
2016-08-18 15:25:06,216 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/.backup.manifest
2016-08-18 15:25:06,219 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358
2016-08-18 15:25:06,219 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns2/test-14715590609531/.backup.manifest
2016-08-18 15:25:06,220 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns3/test-14715590609532/.backup.manifest
2016-08-18 15:25:06,223 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358
2016-08-18 15:25:06,223 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns3/test-14715590609532/.backup.manifest
2016-08-18 15:25:06,224 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns4/test-14715590609533/.backup.manifest
2016-08-18 15:25:06,227 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358
2016-08-18 15:25:06,227 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns4/test-14715590609533/.backup.manifest
2016-08-18 15:25:06,228 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6935004a connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:06,233 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x6935004a0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:06,234 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4927e1a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:06,235 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:06,235 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x6935004a-0x1569fc0e7310016 connected
2016-08-18 15:25:06,235 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:06,237 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:06,237 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63509; # active connections: 10
2016-08-18 15:25:06,238 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:06,238 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63509 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:06,239 INFO [main] impl.RestoreClientImpl(167): HBase table ns1:table1_restore does not exist. It will be created during restore process
2016-08-18 15:25:06,240 INFO [main] impl.RestoreClientImpl(167): HBase table ns2:table2_restore does not exist. It will be created during restore process
2016-08-18 15:25:06,241 INFO [main] impl.RestoreClientImpl(167): HBase table ns3:table3_restore does not exist. It will be created during restore process
2016-08-18 15:25:06,242 INFO [main] impl.RestoreClientImpl(167): HBase table ns4:table4_restore does not exist. It will be created during restore process
2016-08-18 15:25:06,242 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310016
2016-08-18 15:25:06,242 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:06,245 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-1111990922) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:06,245 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63509 because read count=-1. Number of active connections: 10
2016-08-18 15:25:06,245 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-18 15:25:06,249 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/.backup.manifest
2016-08-18 15:25:06,252 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358
2016-08-18 15:25:06,252 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns1/test-1471559060953/.backup.manifest
2016-08-18 15:25:06,252 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns1:test-1471559060953' to 'ns1:table1_restore' from full backup image hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953
2016-08-18 15:25:06,262 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x743daf5 connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:06,265 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x743daf50x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:06,266 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d93146f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:06,266 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:06,266 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:06,267 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x743daf5-0x1569fc0e7310017 connected
2016-08-18 15:25:06,268 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:06,268 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63513; # active connections: 10
2016-08-18 15:25:06,269 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:06,269 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63513 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:06,270 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns1:table1_restore'
2016-08-18 15:25:06,270 DEBUG [main] util.RestoreServerUtil(495): Parsing region dir: hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/archive/data/ns1/test-1471559060953/c61c3bf2f83c0b95289129ff052b32c3
2016-08-18 15:25:06,271 DEBUG [main] util.RestoreServerUtil(525): Parsing family dir [hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/archive/data/ns1/test-1471559060953/c61c3bf2f83c0b95289129ff052b32c3/f in region [hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/archive/data/ns1/test-1471559060953/c61c3bf2f83c0b95289129ff052b32c3]
2016-08-18 15:25:06,272 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 15:25:06,275 DEBUG [main] util.RestoreServerUtil(545): Trying to figure out region boundaries hfile=hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/archive/data/ns1/test-1471559060953/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 first=row0 last=row98
2016-08-18 15:25:06,282 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:06,282 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63514; # active connections: 11
2016-08-18 15:25:06,283 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:06,283 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63514 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:06,285 INFO [B.defaultRpcServer.handler=0,queue=0,port=63280] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-18 15:25:06,392 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns1:table1_restore) id=15 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 15:25:06,396 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-18 15:25:06,398 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:632800000000000
2016-08-18 15:25:06,501 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-18 15:25:06,514 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741912_1088{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:06,516 DEBUG [ProcedureExecutor-6] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns1/table1_restore/.tabledesc/.tableinfo.0000000001
2016-08-18 15:25:06,517 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(6162): creating HRegion ns1:table1_restore HTD == 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp Table name == ns1:table1_restore
2016-08-18 15:25:06,525 INFO [IPC Server handler 1 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741913_1089{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:06,526 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:25:06,526 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.: disabling compactions & flushes
2016-08-18 15:25:06,526 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:25:06,526 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:25:06,635 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c."}
2016-08-18 15:25:06,636 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:06,638 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 15:25:06,705 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-18 15:25:06,745 INFO [ProcedureExecutor-6] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,63282,1471559038490
2016-08-18 15:25:06,746 ERROR [ProcedureExecutor-6] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 15:25:06,746 INFO [ProcedureExecutor-6] master.RegionStates(1106): Transition {854a47f76da7ac7120b78cba57ef767c state=OFFLINE, ts=1471559106745, server=null} to {854a47f76da7ac7120b78cba57ef767c state=PENDING_OPEN, ts=1471559106746, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:06,746 INFO [ProcedureExecutor-6] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. with state=PENDING_OPEN, sn=10.22.9.171,63282,1471559038490
2016-08-18 15:25:06,747 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:06,749 INFO [PriorityRpcServer.handler=4,queue=0,port=63282] regionserver.RSRpcServices(1666): Open ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:25:06,753 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(6339): Opening region: {ENCODED => 854a47f76da7ac7120b78cba57ef767c, NAME => 'ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.', STARTKEY => '', ENDKEY => ''}
2016-08-18 15:25:06,754 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table1_restore 854a47f76da7ac7120b78cba57ef767c
2016-08-18 15:25:06,754 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:25:06,757 INFO [StoreOpener-854a47f76da7ac7120b78cba57ef767c-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 15:25:06,757 INFO [StoreOpener-854a47f76da7ac7120b78cba57ef767c-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 15:25:06,758 DEBUG [StoreOpener-854a47f76da7ac7120b78cba57ef767c-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f
2016-08-18 15:25:06,759 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c
2016-08-18 15:25:06,764 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 15:25:06,764 INFO [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(871): Onlined 854a47f76da7ac7120b78cba57ef767c; next sequenceid=2
2016-08-18 15:25:06,768 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:25:06,769 INFO [PostOpenDeployTasks:854a47f76da7ac7120b78cba57ef767c] regionserver.HRegionServer(1952): Post open deploy tasks for ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:25:06,770 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.AssignmentManager(2884): Got transition OPENED for {854a47f76da7ac7120b78cba57ef767c state=PENDING_OPEN, ts=1471559106746, server=10.22.9.171,63282,1471559038490} from 10.22.9.171,63282,1471559038490
2016-08-18 15:25:06,770 INFO [B.defaultRpcServer.handler=1,queue=0,port=63280] master.RegionStates(1106): Transition {854a47f76da7ac7120b78cba57ef767c state=PENDING_OPEN, ts=1471559106746, server=10.22.9.171,63282,1471559038490} to {854a47f76da7ac7120b78cba57ef767c state=OPEN, ts=1471559106770, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:06,770 INFO [B.defaultRpcServer.handler=1,queue=0,port=63280] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. with state=OPEN, openSeqNum=2, server=10.22.9.171,63282,1471559038490
2016-08-18 15:25:06,770 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:06,771 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.RegionStates(452): Onlined 854a47f76da7ac7120b78cba57ef767c on 10.22.9.171,63282,1471559038490
2016-08-18 15:25:06,771 DEBUG [ProcedureExecutor-6] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,63282,1471559038490
2016-08-18 15:25:06,771 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559106771,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"}
2016-08-18 15:25:06,771 ERROR [B.defaultRpcServer.handler=1,queue=0,port=63280] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 15:25:06,772 DEBUG [PostOpenDeployTasks:854a47f76da7ac7120b78cba57ef767c] regionserver.HRegionServer(1979): Finished post open deploy task for ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:25:06,772 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] handler.OpenRegionHandler(126): Opened ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. on 10.22.9.171,63282,1471559038490
2016-08-18 15:25:06,772 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:06,773 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to ENABLED in META
2016-08-18 15:25:07,011 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-18 15:25:07,098 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:632800000000000
2016-08-18 15:25:07,098 DEBUG [ProcedureExecutor-6] procedure2.ProcedureExecutor(870): Procedure completed in 708msec: CreateTableProcedure (table=ns1:table1_restore) id=15 owner=tyu state=FINISHED
2016-08-18 15:25:07,515 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=15
2016-08-18 15:25:07,516 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns1:table1_restore completed
2016-08-18 15:25:07,516 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 15:25:07,516 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310017
2016-08-18 15:25:07,519 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:07,521 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63513 because read count=-1. Number of active connections: 11
2016-08-18 15:25:07,521 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (1012439396) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:07,521 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:63272; local cluster node: hdfs://localhost:63272
2016-08-18 15:25:07,521 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/archive/data/ns1/test-1471559060953 on local cluster, back it up before restore
2016-08-18 15:25:07,521 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (422991827) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:07,521 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63514 because read count=-1. Number of active connections: 11
2016-08-18 15:25:07,537 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741914_1090{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:07,538 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore
2016-08-18 15:25:07,539 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore
2016-08-18 15:25:07,556 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3
2016-08-18 15:25:07,557 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x74e23ec2 connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:07,559 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x74e23ec20x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:07,563 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c82690a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:07,563 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:07,563 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:07,564 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x74e23ec2-0x1569fc0e7310018 connected
2016-08-18 15:25:07,566 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:07,566 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63519; # active connections: 10
2016-08-18 15:25:07,566 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:07,567 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63519 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:07,572 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available
2016-08-18 15:25:07,581 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:07,581 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63520; # active connections: 11
2016-08-18 15:25:07,582 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:07,582 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63520 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:07,597 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 15:25:07,601 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 first=row0 last=row98
2016-08-18 15:25:07,614 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c., hostname=10.22.9.171,63282,1471559038490, seqNum=2 for row with hfile group [{[B@7831ca93,hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855}]
2016-08-18 15:25:07,622 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:07,622 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63521; # active connections: 7
2016-08-18 15:25:07,623 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:07,623 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63521 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:07,623 INFO [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(670): Validating hfile at hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 for inclusion in store f region ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:25:07,627 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(682): HFile bounds: first=row0 last=row98
2016-08-18 15:25:07,627 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(684): Region bounds: first= last=
2016-08-18 15:25:07,630 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/e6998b28ec6e4f8cbe153a46906e710c_SeqId_4_
2016-08-18 15:25:07,631 INFO [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(742): Loaded HFile hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 into store 'f' as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/e6998b28ec6e4f8cbe153a46906e710c_SeqId_4_ - updating store file list.
2016-08-18 15:25:07,637 INFO [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(777): Loaded HFile hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/e6998b28ec6e4f8cbe153a46906e710c_SeqId_4_ into store 'f
2016-08-18 15:25:07,637 INFO [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 into store f (new location: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/e6998b28ec6e4f8cbe153a46906e710c_SeqId_4_)
2016-08-18 15:25:07,642 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:25:07,645 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 15:25:07,645 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310018
2016-08-18 15:25:07,646 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:07,647 INFO [main] impl.RestoreClientImpl(292): ns1:test-1471559060953 has been successfully restored to ns1:table1_restore
2016-08-18 15:25:07,647 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-18 15:25:07,647 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63519 because read count=-1. Number of active connections: 11
2016-08-18 15:25:07,647 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471559069358 hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/
2016-08-18 15:25:07,647 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (-1567555078) to /10.22.9.171:63282 from tyu: closed
2016-08-18 15:25:07,647 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63521 because read count=-1. Number of active connections: 7
2016-08-18 15:25:07,647 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63520 because read count=-1. Number of active connections: 11
2016-08-18 15:25:07,647 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (-1865803824) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:07,647 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (-2025593484) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:07,647 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-18 15:25:07,649 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/.backup.manifest
2016-08-18 15:25:07,652 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358
2016-08-18 15:25:07,652 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns2/test-14715590609531/.backup.manifest
2016-08-18 15:25:07,652 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns2:test-14715590609531' to 'ns2:table2_restore' from full backup image hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531
2016-08-18 15:25:07,661 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x28acf61a connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:07,662 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x28acf61a0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:07,663 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@548b5997, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:07,663 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:07,663 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:07,664 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x28acf61a-0x1569fc0e7310019 connected
2016-08-18 15:25:07,665 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:07,665 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63525; # active connections: 10
2016-08-18 15:25:07,666 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:07,666 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63525 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:07,666 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns2:table2_restore'
2016-08-18 15:25:07,666 DEBUG [main] util.RestoreServerUtil(495): Parsing region dir: hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/archive/data/ns2/test-14715590609531/eafb138c6dd37e9e90df990bbe563d21
2016-08-18 15:25:07,668 DEBUG [main] util.RestoreServerUtil(525): Parsing family dir [hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/archive/data/ns2/test-14715590609531/eafb138c6dd37e9e90df990bbe563d21/f in region [hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/archive/data/ns2/test-14715590609531/eafb138c6dd37e9e90df990bbe563d21]
2016-08-18 15:25:07,668 INFO [main] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 15:25:07,671 DEBUG [main] util.RestoreServerUtil(545): Trying to figure out region boundaries hfile=hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/archive/data/ns2/test-14715590609531/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 first=row0 last=row98
2016-08-18 15:25:07,673 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:07,673 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63526; # active connections: 11
2016-08-18 15:25:07,674 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:07,675 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63526 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:07,676 INFO [B.defaultRpcServer.handler=2,queue=0,port=63280] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-18 15:25:07,784 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns2:table2_restore) id=16 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 15:25:07,788 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-18 15:25:07,789 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:632800000000000
2016-08-18 15:25:07,892 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-18 15:25:07,906 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741915_1091{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:07,908 DEBUG [ProcedureExecutor-7] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns2/table2_restore/.tabledesc/.tableinfo.0000000001
2016-08-18 15:25:07,909 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(6162): creating HRegion ns2:table2_restore HTD == 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp Table name == ns2:table2_restore
2016-08-18 15:25:07,917 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741916_1092{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:07,918 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:07,919 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.: disabling compactions & flushes
2016-08-18 15:25:07,919 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:07,919 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:08,029 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a."}
2016-08-18 15:25:08,030 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:08,031 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 15:25:08,099 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-18 15:25:08,140 INFO [ProcedureExecutor-7] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,63282,1471559038490
2016-08-18 15:25:08,141 ERROR [ProcedureExecutor-7] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 15:25:08,141 INFO [ProcedureExecutor-7] master.RegionStates(1106): Transition {bf3c6d412d1b40a1b33f3f2c30bb496a state=OFFLINE, ts=1471559108140, server=null} to {bf3c6d412d1b40a1b33f3f2c30bb496a state=PENDING_OPEN, ts=1471559108141, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:08,141 INFO [ProcedureExecutor-7] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. with state=PENDING_OPEN, sn=10.22.9.171,63282,1471559038490
2016-08-18 15:25:08,142 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:08,143 INFO [PriorityRpcServer.handler=0,queue=0,port=63282] regionserver.RSRpcServices(1666): Open ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:08,148 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] regionserver.HRegion(6339): Opening region: {ENCODED => bf3c6d412d1b40a1b33f3f2c30bb496a, NAME => 'ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.', STARTKEY => '', ENDKEY => ''}
2016-08-18 15:25:08,148 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table2_restore bf3c6d412d1b40a1b33f3f2c30bb496a
2016-08-18 15:25:08,148 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:08,151 INFO [StoreOpener-bf3c6d412d1b40a1b33f3f2c30bb496a-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 15:25:08,151 INFO [StoreOpener-bf3c6d412d1b40a1b33f3f2c30bb496a-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 15:25:08,152 DEBUG [StoreOpener-bf3c6d412d1b40a1b33f3f2c30bb496a-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f
2016-08-18 15:25:08,153 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a
2016-08-18 15:25:08,157 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 15:25:08,157 INFO [RS_OPEN_REGION-10.22.9.171:63282-0] regionserver.HRegion(871): Onlined bf3c6d412d1b40a1b33f3f2c30bb496a; next sequenceid=2
2016-08-18 15:25:08,158 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:25:08,159 INFO [PostOpenDeployTasks:bf3c6d412d1b40a1b33f3f2c30bb496a] regionserver.HRegionServer(1952): Post open deploy tasks for ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:08,160 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] master.AssignmentManager(2884): Got transition OPENED for {bf3c6d412d1b40a1b33f3f2c30bb496a state=PENDING_OPEN, ts=1471559108141, server=10.22.9.171,63282,1471559038490} from 10.22.9.171,63282,1471559038490
2016-08-18 15:25:08,160 INFO [B.defaultRpcServer.handler=3,queue=0,port=63280] master.RegionStates(1106): Transition {bf3c6d412d1b40a1b33f3f2c30bb496a state=PENDING_OPEN, ts=1471559108141, server=10.22.9.171,63282,1471559038490} to {bf3c6d412d1b40a1b33f3f2c30bb496a state=OPEN, ts=1471559108160, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:08,160 INFO [B.defaultRpcServer.handler=3,queue=0,port=63280] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. with state=OPEN, openSeqNum=2, server=10.22.9.171,63282,1471559038490
2016-08-18 15:25:08,160 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:08,161 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] master.RegionStates(452): Onlined bf3c6d412d1b40a1b33f3f2c30bb496a on 10.22.9.171,63282,1471559038490
2016-08-18 15:25:08,161 DEBUG [ProcedureExecutor-7] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,63282,1471559038490
2016-08-18 15:25:08,161 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559108161,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"}
2016-08-18 15:25:08,161 ERROR [B.defaultRpcServer.handler=3,queue=0,port=63280] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 15:25:08,162 DEBUG [PostOpenDeployTasks:bf3c6d412d1b40a1b33f3f2c30bb496a] regionserver.HRegionServer(1979): Finished post open deploy task for ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:08,164 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] handler.OpenRegionHandler(126): Opened ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. on 10.22.9.171,63282,1471559038490
2016-08-18 15:25:08,164 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:08,165 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to ENABLED in META
2016-08-18 15:25:08,401 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-18 15:25:08,494 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:632800000000000
2016-08-18 15:25:08,494 DEBUG [ProcedureExecutor-7] procedure2.ProcedureExecutor(870): Procedure completed in 707msec: CreateTableProcedure (table=ns2:table2_restore) id=16 owner=tyu state=FINISHED
2016-08-18 15:25:08,909 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=16
2016-08-18 15:25:08,910 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns2:table2_restore completed
2016-08-18 15:25:08,910 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 15:25:08,910 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310019
2016-08-18 15:25:08,913 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:08,914 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:63272; local cluster node: hdfs://localhost:63272
2016-08-18 15:25:08,914 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/archive/data/ns2/test-14715590609531 on local cluster, back it up before restore
2016-08-18 15:25:08,914 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63526 because read count=-1. Number of active connections: 11
2016-08-18 15:25:08,914 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63525 because read count=-1. Number of active connections: 11
2016-08-18 15:25:08,914 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (-201898323) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:08,914 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (-667834931) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:08,931 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741917_1093{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:08,932 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore
2016-08-18 15:25:08,932 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore
2016-08-18 15:25:08,948 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21
2016-08-18 15:25:08,949 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1df671fd connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:08,952 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x1df671fd0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:08,953 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@623d87ee, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:08,953 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:08,953 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:08,954 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x1df671fd-0x1569fc0e731001a connected
2016-08-18 15:25:08,956 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:08,956 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63531; # active connections: 10
2016-08-18 15:25:08,957 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:08,957 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63531 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:08,963 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available
2016-08-18 15:25:08,970 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:08,970 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63532; # active connections: 11
2016-08-18 15:25:08,970 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:08,970 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63532 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:08,975 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 15:25:08,978 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 first=row0 last=row98
2016-08-18 15:25:08,981 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a., hostname=10.22.9.171,63282,1471559038490, seqNum=2 for row with hfile group [{[B@6b3fefb6,hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0}]
2016-08-18 15:25:08,987 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:08,987 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63533; # active connections: 7
2016-08-18 15:25:08,988 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:08,988 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63533 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:08,989 INFO [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(670): Validating hfile at hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 for inclusion in store f region ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:08,992 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(682): HFile bounds: first=row0 last=row98
2016-08-18 15:25:08,992 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(684): Region bounds: first= last=
2016-08-18 15:25:08,994 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/65825414117048bfb844bd376703334b_SeqId_4_
2016-08-18 15:25:08,995 INFO [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(742): Loaded HFile hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 into store 'f' as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/65825414117048bfb844bd376703334b_SeqId_4_ - updating store file list.
2016-08-18 15:25:09,001 INFO [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(777): Loaded HFile hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/65825414117048bfb844bd376703334b_SeqId_4_ into store 'f
2016-08-18 15:25:09,001 INFO [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 into store f (new location: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/65825414117048bfb844bd376703334b_SeqId_4_)
2016-08-18 15:25:09,002 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:25:09,003 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 15:25:09,003 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731001a
2016-08-18 15:25:09,005 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:09,006 INFO [main] impl.RestoreClientImpl(292): ns2:test-14715590609531 has been successfully restored to ns2:table2_restore
2016-08-18 15:25:09,006 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63533 because read count=-1. Number of active connections: 7
2016-08-18 15:25:09,006 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63531 because read count=-1. Number of active connections: 11
2016-08-18 15:25:09,006 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-18 15:25:09,006 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63532 because read count=-1. Number of active connections: 11
2016-08-18 15:25:09,006 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (441382763) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:09,006 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-793050894) to /10.22.9.171:63282 from tyu: closed
2016-08-18 15:25:09,006 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (907815463) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:09,006 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471559069358 hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/
2016-08-18 15:25:09,007 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-18 15:25:09,007 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns3/test-14715590609532/.backup.manifest
2016-08-18 15:25:09,010 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358
2016-08-18 15:25:09,010 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns3/test-14715590609532/.backup.manifest
2016-08-18 15:25:09,010 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns3:test-14715590609532' to 'ns3:table3_restore' from full backup image hdfs://localhost:63272/backupUT/backup_1471559069358/ns3/test-14715590609532
2016-08-18 15:25:09,018 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:63272/backupUT/backup_1471559069358/ns3/test-14715590609532/archive/data/ns3/test-14715590609532 does not exists
2016-08-18 15:25:09,018 DEBUG [main] util.RestoreServerUtil(315): find table descriptor but no archive dir for table ns3:test-14715590609532, will only create table
2016-08-18 15:25:09,019 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4dc7cc28 connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:09,021 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x4dc7cc280x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:09,021 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e9c7270, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:09,022 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:09,022 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:09,022 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x4dc7cc28-0x1569fc0e731001b connected
2016-08-18 15:25:09,024 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:09,024 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63537; # active connections: 10
2016-08-18 15:25:09,026 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:09,026 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63537 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:09,027 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns3:table3_restore'
2016-08-18 15:25:09,028 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:09,028 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63538; # active connections: 11
2016-08-18 15:25:09,029 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:09,029 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63538 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:09,030 INFO [B.defaultRpcServer.handler=1,queue=0,port=63280] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-18 15:25:09,135 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns3:table3_restore) id=17 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 15:25:09,137 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-18 15:25:09,139 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:632800000000000
2016-08-18 15:25:09,242 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-18 15:25:09,255 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741918_1094{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 291
2016-08-18 15:25:09,449 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-18 15:25:09,664 DEBUG [ProcedureExecutor-1] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns3/table3_restore/.tabledesc/.tableinfo.0000000001
2016-08-18 15:25:09,666 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(6162): creating HRegion ns3:table3_restore HTD == 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp Table name == ns3:table3_restore
2016-08-18 15:25:09,680 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741919_1095{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 45
2016-08-18 15:25:09,757 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-18 15:25:10,087 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:25:10,088 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.: disabling compactions & flushes
2016-08-18 15:25:10,088 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:25:10,088 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:25:10,197 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064."}
2016-08-18 15:25:10,198 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:10,199 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 15:25:10,261 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-18 15:25:10,304 INFO [ProcedureExecutor-1] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,63282,1471559038490
2016-08-18 15:25:10,305 ERROR [ProcedureExecutor-1] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 15:25:10,305 INFO [ProcedureExecutor-1] master.RegionStates(1106): Transition {64e80db997c7530f46efbcdcb1606064 state=OFFLINE, ts=1471559110304, server=null} to {64e80db997c7530f46efbcdcb1606064 state=PENDING_OPEN, ts=1471559110305, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:10,306 INFO [ProcedureExecutor-1] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. with state=PENDING_OPEN, sn=10.22.9.171,63282,1471559038490
2016-08-18 15:25:10,306 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:10,308 INFO [PriorityRpcServer.handler=1,queue=1,port=63282] regionserver.RSRpcServices(1666): Open ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:25:10,312 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] regionserver.HRegion(6339): Opening region: {ENCODED => 64e80db997c7530f46efbcdcb1606064, NAME => 'ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.', STARTKEY => '', ENDKEY => ''}
2016-08-18 15:25:10,313 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table3_restore 64e80db997c7530f46efbcdcb1606064
2016-08-18 15:25:10,313 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:25:10,316 INFO [StoreOpener-64e80db997c7530f46efbcdcb1606064-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 15:25:10,317 INFO [StoreOpener-64e80db997c7530f46efbcdcb1606064-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6
2016-08-18 15:25:10,318 DEBUG [StoreOpener-64e80db997c7530f46efbcdcb1606064-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064/f
2016-08-18 15:25:10,318 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064
2016-08-18 15:25:10,323 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 15:25:10,323 INFO [RS_OPEN_REGION-10.22.9.171:63282-1] regionserver.HRegion(871): Onlined 64e80db997c7530f46efbcdcb1606064; next sequenceid=2
2016-08-18 15:25:10,324 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547
2016-08-18 15:25:10,325 INFO [PostOpenDeployTasks:64e80db997c7530f46efbcdcb1606064] regionserver.HRegionServer(1952): Post open deploy tasks for ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:25:10,325 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.AssignmentManager(2884): Got transition OPENED for {64e80db997c7530f46efbcdcb1606064 state=PENDING_OPEN, ts=1471559110305, server=10.22.9.171,63282,1471559038490} from 10.22.9.171,63282,1471559038490
2016-08-18 15:25:10,325 INFO [B.defaultRpcServer.handler=0,queue=0,port=63280] master.RegionStates(1106): Transition {64e80db997c7530f46efbcdcb1606064 state=PENDING_OPEN, ts=1471559110305, server=10.22.9.171,63282,1471559038490} to {64e80db997c7530f46efbcdcb1606064 state=OPEN, ts=1471559110325, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:10,326 INFO [B.defaultRpcServer.handler=0,queue=0,port=63280] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. with state=OPEN, openSeqNum=2, server=10.22.9.171,63282,1471559038490
2016-08-18 15:25:10,326 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:10,327 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.RegionStates(452): Onlined 64e80db997c7530f46efbcdcb1606064 on 10.22.9.171,63282,1471559038490
2016-08-18 15:25:10,327 DEBUG [ProcedureExecutor-1] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,63282,1471559038490
2016-08-18 15:25:10,327 DEBUG [ProcedureExecutor-1] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559110327,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"}
2016-08-18 15:25:10,327 ERROR [B.defaultRpcServer.handler=0,queue=0,port=63280] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 15:25:10,327 DEBUG [PostOpenDeployTasks:64e80db997c7530f46efbcdcb1606064] regionserver.HRegionServer(1979): Finished post open deploy task for ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:25:10,328 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] handler.OpenRegionHandler(126): Opened ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. on 10.22.9.171,63282,1471559038490
2016-08-18 15:25:10,328 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:10,329 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to ENABLED in META
2016-08-18 15:25:10,657 DEBUG [ProcedureExecutor-1] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:632800000000000
2016-08-18 15:25:10,658 DEBUG [ProcedureExecutor-1] procedure2.ProcedureExecutor(870): Procedure completed in 1.5160sec: CreateTableProcedure (table=ns3:table3_restore) id=17 owner=tyu state=FINISHED
2016-08-18 15:25:11,266 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=17
2016-08-18 15:25:11,266 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns3:table3_restore completed
2016-08-18 15:25:11,266 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 15:25:11,266 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731001b
2016-08-18 15:25:11,267 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:11,268 INFO [main] impl.RestoreClientImpl(292): ns3:test-14715590609532 has been successfully restored to ns3:table3_restore
2016-08-18 15:25:11,268 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63537 because read count=-1. Number of active connections: 11
2016-08-18 15:25:11,268 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-18 15:25:11,269 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471559069358 hdfs://localhost:63272/backupUT/backup_1471559069358/ns3/test-14715590609532/
2016-08-18 15:25:11,268 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (545294366) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:11,268 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (-527348322) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:11,268 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63538 because read count=-1. Number of active connections: 11
2016-08-18 15:25:11,269 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-18 15:25:11,270 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns4/test-14715590609533/.backup.manifest
2016-08-18 15:25:11,273 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358
2016-08-18 15:25:11,273 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns4/test-14715590609533/.backup.manifest
2016-08-18 15:25:11,273 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns4:test-14715590609533' to 'ns4:table4_restore' from full backup image hdfs://localhost:63272/backupUT/backup_1471559069358/ns4/test-14715590609533
2016-08-18 15:25:11,279 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:63272/backupUT/backup_1471559069358/ns4/test-14715590609533/archive/data/ns4/test-14715590609533 does not exists
2016-08-18 15:25:11,279 DEBUG [main] util.RestoreServerUtil(315): find table descriptor but no archive dir for table ns4:test-14715590609533, will only create table
2016-08-18 15:25:11,280 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x243c5bf6 connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:11,282 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x243c5bf60x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:11,282 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d0e0d0a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:11,283 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:11,283 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:11,283 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x243c5bf6-0x1569fc0e731001c connected
2016-08-18 15:25:11,284 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:11,284 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63545; # active connections: 10
2016-08-18 15:25:11,285 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:11,285 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63545 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:11,286 INFO [main] util.RestoreServerUtil(596): Creating target table 'ns4:table4_restore'
2016-08-18 15:25:11,288 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:11,288 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63546; # active connections: 11
2016-08-18 15:25:11,288 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:11,288 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63546 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:11,290 INFO [B.defaultRpcServer.handler=0,queue=0,port=63280] master.HMaster(1495): Client=tyu//10.22.9.171 create 'ns4:table4_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2016-08-18 15:25:11,394 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure CreateTableProcedure (table=ns4:table4_restore) id=18 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 15:25:11,397 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-18 15:25:11,399 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns4:table4_restore/write-master:632800000000000
2016-08-18 15:25:11,504 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=18
2016-08-18 15:25:11,516 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741920_1096{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:11,518 DEBUG [ProcedureExecutor-0] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns4/table4_restore/.tabledesc/.tableinfo.0000000001
2016-08-18 15:25:11,519 INFO [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(6162): creating HRegion ns4:table4_restore HTD == 'ns4:table4_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp Table name == ns4:table4_restore
2016-08-18 15:25:11,527 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741921_1097{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:11,528 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(736): Instantiated ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e.
2016-08-18 15:25:11,529 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1419): Closing ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e.: disabling compactions & flushes 2016-08-18 15:25:11,529 DEBUG [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1446): Updates disabled for region ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e. 2016-08-18 15:25:11,529 INFO [RegionOpenAndInitThread-ns4:table4_restore-1] regionserver.HRegion(1552): Closed ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e. 2016-08-18 15:25:11,636 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e."} 2016-08-18 15:25:11,638 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:11,639 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1571): Added 1 2016-08-18 15:25:11,711 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=18 2016-08-18 15:25:11,745 INFO [ProcedureExecutor-0] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,63282,1471559038490 2016-08-18 15:25:11,746 ERROR [ProcedureExecutor-0] master.TableStateManager(134): Unable to get table ns4:table4_restore state org.apache.hadoop.hbase.TableNotFoundException: ns4:table4_restore at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131) at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546) at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:127) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:57) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494) 2016-08-18 15:25:11,747 INFO [ProcedureExecutor-0] master.RegionStates(1106): Transition {5f33ce9d76378cebce2b8fb0a44fa79e state=OFFLINE, ts=1471559111745, server=null} to 
{5f33ce9d76378cebce2b8fb0a44fa79e state=PENDING_OPEN, ts=1471559111747, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:25:11,747 INFO [ProcedureExecutor-0] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e. with state=PENDING_OPEN, sn=10.22.9.171,63282,1471559038490 2016-08-18 15:25:11,747 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:11,749 INFO [PriorityRpcServer.handler=2,queue=0,port=63282] regionserver.RSRpcServices(1666): Open ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e. 2016-08-18 15:25:11,754 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(6339): Opening region: {ENCODED => 5f33ce9d76378cebce2b8fb0a44fa79e, NAME => 'ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e.', STARTKEY => '', ENDKEY => ''} 2016-08-18 15:25:11,755 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table4_restore 5f33ce9d76378cebce2b8fb0a44fa79e 2016-08-18 15:25:11,755 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(736): Instantiated ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e. 2016-08-18 15:25:11,758 INFO [StoreOpener-5f33ce9d76378cebce2b8fb0a44fa79e-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=2, currentSize=1079848, freeSize=1042882456, maxSize=1043962304, heapSize=1079848, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 15:25:11,759 INFO [StoreOpener-5f33ce9d76378cebce2b8fb0a44fa79e-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 15:25:11,760 DEBUG [StoreOpener-5f33ce9d76378cebce2b8fb0a44fa79e-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns4/table4_restore/5f33ce9d76378cebce2b8fb0a44fa79e/f 2016-08-18 15:25:11,761 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns4/table4_restore/5f33ce9d76378cebce2b8fb0a44fa79e 2016-08-18 15:25:11,766 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns4/table4_restore/5f33ce9d76378cebce2b8fb0a44fa79e/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-18 15:25:11,766 INFO [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(871): Onlined 5f33ce9d76378cebce2b8fb0a44fa79e; next sequenceid=2 2016-08-18 15:25:11,766 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer 
hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984 2016-08-18 15:25:11,767 INFO [PostOpenDeployTasks:5f33ce9d76378cebce2b8fb0a44fa79e] regionserver.HRegionServer(1952): Post open deploy tasks for ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e. 2016-08-18 15:25:11,767 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.AssignmentManager(2884): Got transition OPENED for {5f33ce9d76378cebce2b8fb0a44fa79e state=PENDING_OPEN, ts=1471559111747, server=10.22.9.171,63282,1471559038490} from 10.22.9.171,63282,1471559038490 2016-08-18 15:25:11,768 INFO [B.defaultRpcServer.handler=1,queue=0,port=63280] master.RegionStates(1106): Transition {5f33ce9d76378cebce2b8fb0a44fa79e state=PENDING_OPEN, ts=1471559111747, server=10.22.9.171,63282,1471559038490} to {5f33ce9d76378cebce2b8fb0a44fa79e state=OPEN, ts=1471559111768, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:25:11,768 INFO [B.defaultRpcServer.handler=1,queue=0,port=63280] master.RegionStateStore(207): Updating hbase:meta row ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e. with state=OPEN, openSeqNum=2, server=10.22.9.171,63282,1471559038490 2016-08-18 15:25:11,768 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:11,769 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.RegionStates(452): Onlined 5f33ce9d76378cebce2b8fb0a44fa79e on 10.22.9.171,63282,1471559038490 2016-08-18 15:25:11,769 DEBUG [ProcedureExecutor-0] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,63282,1471559038490 2016-08-18 15:25:11,769 DEBUG [ProcedureExecutor-0] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559111769,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns4:table4_restore"} 2016-08-18 15:25:11,769 ERROR [B.defaultRpcServer.handler=1,queue=0,port=63280] master.TableStateManager(134): Unable to get table ns4:table4_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns4:table4_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 15:25:11,773 DEBUG [PostOpenDeployTasks:5f33ce9d76378cebce2b8fb0a44fa79e] regionserver.HRegionServer(1979): Finished post open deploy task for ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e.
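The two TableNotFoundException traces above are a race inside the create path rather than a real failure: CreateTableProcedure assigns the region before the table-state row for ns4:table4_restore is written to hbase:meta, so TableStateManager has nothing to read yet; the procedure still completes and the table is marked ENABLED just below. A client that must wait for the new table to be fully assigned can poll the Admin API; a minimal sketch, assuming nothing beyond the stock client (class name and polling interval are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitForRestoreTable {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                TableName tn = TableName.valueOf("ns4", "table4_restore");
                // isTableAvailable only turns true once every region is assigned,
                // i.e. after the OFFLINE -> PENDING_OPEN -> OPEN transitions above.
                while (!admin.isTableAvailable(tn)) {
                    Thread.sleep(100);
                }
            }
        }
    }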
2016-08-18 15:25:11,773 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] handler.OpenRegionHandler(126): Opened ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e. on 10.22.9.171,63282,1471559038490 2016-08-18 15:25:11,773 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:11,774 INFO [ProcedureExecutor-0] hbase.MetaTableAccessor(1700): Updated table ns4:table4_restore state to ENABLED in META 2016-08-18 15:25:12,018 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=18 2016-08-18 15:25:12,105 DEBUG [ProcedureExecutor-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns4:table4_restore/write-master:632800000000000 2016-08-18 15:25:12,105 DEBUG [ProcedureExecutor-0] procedure2.ProcedureExecutor(870): Procedure completed in 705msec: CreateTableProcedure (table=ns4:table4_restore) id=18 owner=tyu state=FINISHED 2016-08-18 15:25:12,263 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties 2016-08-18 15:25:12,525 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=18 2016-08-18 15:25:12,526 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: CREATE, Table Name: ns4:table4_restore completed 2016-08-18 15:25:12,526 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 15:25:12,526 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731001c 2016-08-18 15:25:12,529 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:25:12,530 INFO [main] impl.RestoreClientImpl(292): ns4:test-14715590609533 has been successfully restored to ns4:table4_restore 2016-08-18 15:25:12,531 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63546 because read count=-1. Number of active connections: 11 2016-08-18 15:25:12,531 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (1607253960) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:25:12,531 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-18 15:25:12,531 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471559069358 hdfs://localhost:63272/backupUT/backup_1471559069358/ns4/test-14715590609533/ 2016-08-18 15:25:12,531 DEBUG [main] impl.RestoreClientImpl(234): restoreStage finished 2016-08-18 15:25:12,531 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (-1842776097) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:25:12,531 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63545 because read count=-1. Number of active connections: 11 2016-08-18 15:25:12,531 INFO [main] impl.RestoreClientImpl(108): Restore for [ns1:test-1471559060953, ns2:test-14715590609531, ns3:test-14715590609532, ns4:test-14715590609533] are successful! 
2016-08-18 15:25:12,573 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559097949/ns1/test-1471559060953/.backup.manifest 2016-08-18 15:25:12,576 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559097949 2016-08-18 15:25:12,576 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559097949/ns1/test-1471559060953/.backup.manifest 2016-08-18 15:25:12,577 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559097949/ns2/test-14715590609531/.backup.manifest 2016-08-18 15:25:12,579 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559097949 2016-08-18 15:25:12,580 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559097949/ns2/test-14715590609531/.backup.manifest 2016-08-18 15:25:12,580 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559097949/ns3/test-14715590609532/.backup.manifest 2016-08-18 15:25:12,583 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559097949 2016-08-18 15:25:12,583 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559097949/ns3/test-14715590609532/.backup.manifest 2016-08-18 15:25:12,584 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xadf2afb connecting to ZooKeeper ensemble=localhost:61765 2016-08-18 15:25:12,588 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0xadf2afb0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 15:25:12,589 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c141c1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 15:25:12,589 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 15:25:12,589 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 15:25:12,590 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0xadf2afb-0x1569fc0e731001d connected 2016-08-18 15:25:12,594 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:25:12,594 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63553; # active connections: 10 2016-08-18 15:25:12,595 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:25:12,595 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63553 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:25:12,603 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731001d 2016-08-18 15:25:12,604 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC 
client 2016-08-18 15:25:12,605 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira 2016-08-18 15:25:12,605 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (-2097666042) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:25:12,605 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63553 because read count=-1. Number of active connections: 10 2016-08-18 15:25:12,606 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/.backup.manifest 2016-08-18 15:25:12,609 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358 2016-08-18 15:25:12,609 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns1/test-1471559060953/.backup.manifest 2016-08-18 15:25:12,609 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns1:test-1471559060953' to 'ns1:table1_restore' from full backup image hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953 2016-08-18 15:25:12,618 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x308c8323 connecting to ZooKeeper ensemble=localhost:61765 2016-08-18 15:25:12,621 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x308c83230x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 15:25:12,622 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@411cd51b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 15:25:12,622 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 15:25:12,622 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 15:25:12,623 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x308c8323-0x1569fc0e731001e connected 2016-08-18 15:25:12,624 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:25:12,624 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63557; # active connections: 10 2016-08-18 15:25:12,625 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:25:12,625 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63557 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:25:12,626 INFO [main] util.RestoreServerUtil(585): Truncating existing target table 'ns1:table1_restore', preserving region splits 2016-08-18 15:25:12,628 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 15:25:12,628 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880):
RpcServer.listener,port=63280: connection from 10.22.9.171:63558; # active connections: 11 2016-08-18 15:25:12,629 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:25:12,629 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63558 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:25:12,629 INFO [main] client.HBaseAdmin$10(780): Started disable of ns1:table1_restore 2016-08-18 15:25:12,633 INFO [B.defaultRpcServer.handler=0,queue=0,port=63280] master.HMaster(1986): Client=tyu//10.22.9.171 disable ns1:table1_restore 2016-08-18 15:25:12,747 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns1:table1_restore) id=19 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store. 2016-08-18 15:25:12,750 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:632800000000001 2016-08-18 15:25:12,752 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=19 2016-08-18 15:25:12,858 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=19 2016-08-18 15:25:12,966 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559112966,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"} 2016-08-18 15:25:12,967 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:12,969 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to DISABLING in META 2016-08-18 15:25:13,061 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=19 2016-08-18 15:25:13,076 INFO [ProcedureExecutor-2] procedure.DisableTableProcedure(395): Offlining 1 regions. 2016-08-18 15:25:13,081 DEBUG [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 
(offlining), current state: {854a47f76da7ac7120b78cba57ef767c state=OPEN, ts=1471559106770, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:25:13,081 INFO [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {854a47f76da7ac7120b78cba57ef767c state=OPEN, ts=1471559106770, server=10.22.9.171,63282,1471559038490} to {854a47f76da7ac7120b78cba57ef767c state=PENDING_CLOSE, ts=1471559113081, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:25:13,081 INFO [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. with state=PENDING_CLOSE 2016-08-18 15:25:13,081 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:13,085 INFO [PriorityRpcServer.handler=4,queue=0,port=63282] regionserver.RSRpcServices(1314): Close 854a47f76da7ac7120b78cba57ef767c, moving to null 2016-08-18 15:25:13,086 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] handler.CloseRegionHandler(90): Processing close of ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 2016-08-18 15:25:13,086 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.: disabling compactions & flushes 2016-08-18 15:25:13,086 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 2016-08-18 15:25:13,088 DEBUG [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.9.171,63282,1471559038490 for region ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 2016-08-18 15:25:13,088 INFO [StoreCloserThread-ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.-1] regionserver.HStore(839): Closed f 2016-08-18 15:25:13,088 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118 2016-08-18 15:25:13,093 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/recovered.edits/6.seqid to file, newSeqId=6, maxSeqId=2 2016-08-18 15:25:13,096 INFO [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 
2016-08-18 15:25:13,097 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.AssignmentManager(2884): Got transition CLOSED for {854a47f76da7ac7120b78cba57ef767c state=PENDING_CLOSE, ts=1471559113081, server=10.22.9.171,63282,1471559038490} from 10.22.9.171,63282,1471559038490 2016-08-18 15:25:13,098 INFO [B.defaultRpcServer.handler=2,queue=0,port=63280] master.RegionStates(1106): Transition {854a47f76da7ac7120b78cba57ef767c state=PENDING_CLOSE, ts=1471559113081, server=10.22.9.171,63282,1471559038490} to {854a47f76da7ac7120b78cba57ef767c state=OFFLINE, ts=1471559113098, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:25:13,098 INFO [B.defaultRpcServer.handler=2,queue=0,port=63280] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. with state=OFFLINE 2016-08-18 15:25:13,098 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:13,099 INFO [B.defaultRpcServer.handler=2,queue=0,port=63280] master.RegionStates(590): Offlined 854a47f76da7ac7120b78cba57ef767c from 10.22.9.171,63282,1471559038490 2016-08-18 15:25:13,099 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] handler.CloseRegionHandler(122): Closed ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 2016-08-18 15:25:13,235 DEBUG [ProcedureExecutor-2] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559113235,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"} 2016-08-18 15:25:13,237 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:13,238 INFO [ProcedureExecutor-2] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to DISABLED in META 2016-08-18 15:25:13,238 INFO [ProcedureExecutor-2] procedure.DisableTableProcedure(424): Disabled table, ns1:table1_restore, is completed. 
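This disable is the first half of the truncate-and-reload path announced by "Truncating existing target table ... preserving region splits" above. The equivalent Admin calls, sketched with the same illustrative conn as the earlier snippets:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    final class TruncateTarget {
        static void truncateForRestore(Connection conn) throws IOException {
            TableName target = TableName.valueOf("ns1", "table1_restore");
            try (Admin admin = conn.getAdmin()) {
                if (admin.isTableEnabled(target)) {
                    admin.disableTable(target);  // DisableTableProcedure, procId=19 above
                }
                // preserveSplits=true keeps the existing region boundaries, matching
                // "preserveSplits=true" on TruncateTableProcedure (procId=20) below.
                admin.truncateTable(target, true);
            }
        }
    }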
2016-08-18 15:25:13,368 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=19 2016-08-18 15:25:13,455 DEBUG [ProcedureExecutor-2] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:632800000000001 2016-08-18 15:25:13,455 DEBUG [ProcedureExecutor-2] procedure2.ProcedureExecutor(870): Procedure completed in 712msec: DisableTableProcedure (table=ns1:table1_restore) id=19 owner=tyu state=FINISHED 2016-08-18 15:25:13,874 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=19 2016-08-18 15:25:13,875 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns1:table1_restore completed 2016-08-18 15:25:13,877 INFO [main] client.HBaseAdmin$8(615): Started truncating ns1:table1_restore 2016-08-18 15:25:13,882 INFO [B.defaultRpcServer.handler=3,queue=0,port=63280] master.HMaster(1848): Client=tyu//10.22.9.171 truncate ns1:table1_restore 2016-08-18 15:25:13,997 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns1:table1_restore preserveSplits=true) id=20 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store. 2016-08-18 15:25:14,000 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns1:table1_restore/write-master:632800000000002 2016-08-18 15:25:14,002 DEBUG [ProcedureExecutor-3] procedure.TruncateTableProcedure(87): waiting for 'ns1:table1_restore' regions in transition 2016-08-18 15:25:14,115 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1471559114114,"tag":[],"qualifier":"","vlen":0}]},"row":"ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c."} 2016-08-18 15:25:14,116 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:14,117 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => 854a47f76da7ac7120b78cba57ef767c, NAME => 'ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.', STARTKEY => '', ENDKEY => ''}] 2016-08-18 15:25:14,121 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(408): Removing 'ns1:table1_restore' from region states. 2016-08-18 15:25:14,121 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(412): Marking 'ns1:table1_restore' as deleted. 2016-08-18 15:25:14,122 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1471559114122,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns1:table1_restore"} 2016-08-18 15:25:14,122 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:14,123 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1726): Deleted table ns1:table1_restore state from META 2016-08-18 15:25:14,233 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(340): Archiving region ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 
from FS 2016-08-18 15:25:14,236 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c 2016-08-18 15:25:14,241 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/recovered.edits] 2016-08-18 15:25:14,249 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/e6998b28ec6e4f8cbe153a46906e710c_SeqId_4_, to hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/archive/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/e6998b28ec6e4f8cbe153a46906e710c_SeqId_4_ 2016-08-18 15:25:14,255 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/recovered.edits/6.seqid, to hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/archive/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/recovered.edits/6.seqid 2016-08-18 15:25:14,255 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741913_1089 127.0.0.1:63273 2016-08-18 15:25:14,256 DEBUG [ProcedureExecutor-3] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c 2016-08-18 15:25:14,256 DEBUG [ProcedureExecutor-3] procedure.DeleteTableProcedure(344): Table 'ns1:table1_restore' archived! 
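Note that truncation does not destroy the old data outright: HFileArchiver relocates the region's store file and recovered.edits into the archive tree, where they remain until cleaned. Listing what was archived is ordinary HDFS work; a small sketch using the archive path from the log:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class ListArchivedFiles {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:63272"), new Configuration());
            // Archived region files land under <hbase root>/archive/data/<ns>/<table>/<region>/
            RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path(
                "/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/archive/data/ns1/table1_restore"),
                true);
            while (it.hasNext()) {
                System.out.println(it.next().getPath());
            }
        }
    }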
2016-08-18 15:25:14,258 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741912_1088 127.0.0.1:63273 2016-08-18 15:25:14,374 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741922_1098{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:25:14,377 DEBUG [ProcedureExecutor-3] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns1/table1_restore/.tabledesc/.tableinfo.0000000001 2016-08-18 15:25:14,378 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(6162): creating HRegion ns1:table1_restore HTD == 'ns1:table1_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp Table name == ns1:table1_restore 2016-08-18 15:25:14,386 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741923_1099{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:25:14,387 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 2016-08-18 15:25:14,390 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.: disabling compactions & flushes 2016-08-18 15:25:14,391 DEBUG [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 2016-08-18 15:25:14,391 INFO [RegionOpenAndInitThread-ns1:table1_restore-1] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 
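The HTD dump above spells out the schema being recreated: a single family 'f' with one version, a ROW bloom filter, and defaults everywhere else. Rebuilding that descriptor through the pre-2.0 API this snapshot uses would look roughly like:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.regionserver.BloomType;

    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("ns1", "table1_restore"));
    HColumnDescriptor f = new HColumnDescriptor("f");
    f.setMaxVersions(1);                  // VERSIONS => '1'
    f.setBloomFilterType(BloomType.ROW);  // BLOOMFILTER => 'ROW'
    f.setBlocksize(65536);                // BLOCKSIZE => '65536'
    htd.addFamily(f);
    // admin.createTable(htd) would replay the same create/assign flow seen for ns4 earlier.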
2016-08-18 15:25:14,502 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c."} 2016-08-18 15:25:14,504 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:14,505 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1571): Added 1 2016-08-18 15:25:14,541 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2fc1fd9c] blockmanagement.BlockManager(3482): BLOCK* BlockManager: ask 127.0.0.1:63273 to delete [blk_1073741912_1088, blk_1073741913_1089] 2016-08-18 15:25:14,612 INFO [ProcedureExecutor-3] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,63282,1471559038490 2016-08-18 15:25:14,613 ERROR [ProcedureExecutor-3] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 15:25:14,614 INFO [ProcedureExecutor-3] master.RegionStates(1106): Transition {854a47f76da7ac7120b78cba57ef767c state=OFFLINE, ts=1471559114612, server=null} to {854a47f76da7ac7120b78cba57ef767c state=PENDING_OPEN, ts=1471559114614, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:25:14,614 INFO [ProcedureExecutor-3] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
with state=PENDING_OPEN, sn=10.22.9.171,63282,1471559038490 2016-08-18 15:25:14,615 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:14,616 INFO [PriorityRpcServer.handler=3,queue=1,port=63282] regionserver.RSRpcServices(1666): Open ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 2016-08-18 15:25:14,621 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] regionserver.HRegion(6339): Opening region: {ENCODED => 854a47f76da7ac7120b78cba57ef767c, NAME => 'ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.', STARTKEY => '', ENDKEY => ''} 2016-08-18 15:25:14,621 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table1_restore 854a47f76da7ac7120b78cba57ef767c 2016-08-18 15:25:14,622 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] regionserver.HRegion(736): Instantiated ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 2016-08-18 15:25:14,624 INFO [StoreOpener-854a47f76da7ac7120b78cba57ef767c-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 15:25:14,625 INFO [StoreOpener-854a47f76da7ac7120b78cba57ef767c-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 15:25:14,625 DEBUG [StoreOpener-854a47f76da7ac7120b78cba57ef767c-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f 2016-08-18 15:25:14,626 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c 2016-08-18 15:25:14,631 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-18 15:25:14,631 INFO [RS_OPEN_REGION-10.22.9.171:63282-0] regionserver.HRegion(871): Onlined 854a47f76da7ac7120b78cba57ef767c; next sequenceid=2 2016-08-18 15:25:14,632 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118 2016-08-18 15:25:14,632 INFO [PostOpenDeployTasks:854a47f76da7ac7120b78cba57ef767c] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 2016-08-18 15:25:14,633 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] master.AssignmentManager(2884): Got transition OPENED for {854a47f76da7ac7120b78cba57ef767c state=PENDING_OPEN, ts=1471559114614, server=10.22.9.171,63282,1471559038490} from 10.22.9.171,63282,1471559038490 2016-08-18 15:25:14,633 INFO [B.defaultRpcServer.handler=4,queue=0,port=63280] master.RegionStates(1106): Transition {854a47f76da7ac7120b78cba57ef767c state=PENDING_OPEN, ts=1471559114614, server=10.22.9.171,63282,1471559038490} to {854a47f76da7ac7120b78cba57ef767c state=OPEN, ts=1471559114633, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:25:14,633 INFO [B.defaultRpcServer.handler=4,queue=0,port=63280] master.RegionStateStore(207): Updating hbase:meta row ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. with state=OPEN, openSeqNum=2, server=10.22.9.171,63282,1471559038490 2016-08-18 15:25:14,634 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:14,634 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] master.RegionStates(452): Onlined 854a47f76da7ac7120b78cba57ef767c on 10.22.9.171,63282,1471559038490 2016-08-18 15:25:14,635 DEBUG [ProcedureExecutor-3] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,63282,1471559038490 2016-08-18 15:25:14,635 DEBUG [ProcedureExecutor-3] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559114635,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns1:table1_restore"} 2016-08-18 15:25:14,635 ERROR [B.defaultRpcServer.handler=4,queue=0,port=63280] master.TableStateManager(134): Unable to get table ns1:table1_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns1:table1_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 15:25:14,635 DEBUG [PostOpenDeployTasks:854a47f76da7ac7120b78cba57ef767c] regionserver.HRegionServer(1979): Finished post open deploy task for ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:25:14,636 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:25:14,636 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-0] handler.OpenRegionHandler(126): Opened ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. on 10.22.9.171,63282,1471559038490 2016-08-18 15:25:14,637 INFO [ProcedureExecutor-3] hbase.MetaTableAccessor(1700): Updated table ns1:table1_restore state to ENABLED in META 2016-08-18 15:25:14,746 DEBUG [ProcedureExecutor-3] procedure.TruncateTableProcedure(129): truncate 'ns1:table1_restore' completed 2016-08-18 15:25:14,857 DEBUG [ProcedureExecutor-3] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns1:table1_restore/write-master:632800000000002 2016-08-18 15:25:14,857 DEBUG [ProcedureExecutor-3] procedure2.ProcedureExecutor(870): Procedure completed in 863msec: TruncateTableProcedure (table=ns1:table1_restore preserveSplits=true) id=20 owner=tyu state=FINISHED 2016-08-18 15:25:15,011 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=20 2016-08-18 15:25:15,012 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns1:table1_restore completed 2016-08-18 15:25:15,012 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 15:25:15,012 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731001e 2016-08-18 15:25:15,013 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:25:15,014 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:63272; local cluster node: hdfs://localhost:63272 2016-08-18 15:25:15,014 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/archive/data/ns1/test-1471559060953 on local cluster, back it up before restore 2016-08-18 15:25:15,014 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (855984905) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:25:15,014 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63557 because read count=-1. Number of active connections: 11 2016-08-18 15:25:15,014 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (-2137195367) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:25:15,014 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63558 because read count=-1. 
Number of active connections: 11 2016-08-18 15:25:15,034 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741924_1100{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:25:15,035 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore 2016-08-18 15:25:15,035 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore 2016-08-18 15:25:15,051 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3 2016-08-18 15:25:15,051 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x14476c14 connecting to ZooKeeper ensemble=localhost:61765 2016-08-18 15:25:15,054 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x14476c140x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 15:25:15,055 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1812f7b3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 15:25:15,055 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 15:25:15,056 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 15:25:15,056 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x14476c14-0x1569fc0e731001f connected 2016-08-18 15:25:15,058 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:25:15,058 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63563; # active connections: 10 2016-08-18 15:25:15,059 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:25:15,059 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63563 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:25:15,066 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available 2016-08-18 15:25:15,072 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 15:25:15,072 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63564; # active connections: 11 2016-08-18 15:25:15,072 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:25:15,073 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 
10.22.9.171 port: 63564 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:25:15,077 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 15:25:15,081 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 first=row0 last=row98 2016-08-18 15:25:15,085 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c., hostname=10.22.9.171,63282,1471559038490, seqNum=2 for row with hfile group [{[B@3395e90d,hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855}] 2016-08-18 15:25:15,087 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:25:15,087 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63565; # active connections: 7 2016-08-18 15:25:15,087 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:25:15,087 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63565 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:25:15,088 INFO [B.defaultRpcServer.handler=2,queue=0,port=63282] regionserver.HStore(670): Validating hfile at hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 for inclusion in store f region ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c. 
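This validate-and-commit sequence is LoadIncrementalHFiles at work: the HFile's bounds (first=row0, last=row98) are checked against the region's empty start and end keys, and the file is then committed into the store under a new name carrying a bulk-load sequence id, as the next entries show. The loader can also be driven standalone; a sketch against the staging directory from the log (it implements Tool, so ToolRunner does the argument plumbing):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
    import org.apache.hadoop.util.ToolRunner;

    public class BulkLoadRestoredHFiles {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Args: <hfile directory> <table name>, matching the staged restore dir above.
            int rc = ToolRunner.run(conf, new LoadIncrementalHFiles(conf), new String[] {
                "hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3",
                "ns1:table1_restore" });
            System.exit(rc);
        }
    }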
2016-08-18 15:25:15,091 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63282] regionserver.HStore(682): HFile bounds: first=row0 last=row98 2016-08-18 15:25:15,091 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63282] regionserver.HStore(684): Region bounds: first= last= 2016-08-18 15:25:15,093 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63282] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/34ccbf8d94054ebfbdc51a7fc2c02a8f_SeqId_4_ 2016-08-18 15:25:15,094 INFO [B.defaultRpcServer.handler=2,queue=0,port=63282] regionserver.HStore(742): Loaded HFile hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 into store 'f' as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/34ccbf8d94054ebfbdc51a7fc2c02a8f_SeqId_4_ - updating store file list. 2016-08-18 15:25:15,099 INFO [B.defaultRpcServer.handler=2,queue=0,port=63282] regionserver.HStore(777): Loaded HFile hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/34ccbf8d94054ebfbdc51a7fc2c02a8f_SeqId_4_ into store 'f' 2016-08-18 15:25:15,099 INFO [B.defaultRpcServer.handler=2,queue=0,port=63282] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:63272/user/tyu/hbase-staging/restore/c61c3bf2f83c0b95289129ff052b32c3/f/99d6d04705a54ac7971e0c1e430a2855 into store f (new location: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/34ccbf8d94054ebfbdc51a7fc2c02a8f_SeqId_4_) 2016-08-18 15:25:15,099 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118 2016-08-18 15:25:15,100 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 15:25:15,101 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731001f 2016-08-18 15:25:15,103 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:25:15,104 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63565 because read count=-1. Number of active connections: 7 2016-08-18 15:25:15,104 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (1664186848) to /10.22.9.171:63282 from tyu: closed 2016-08-18 15:25:15,104 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (857189413) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:25:15,104 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (-1631023106) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:25:15,104 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63564 because read count=-1.
Number of active connections: 11
2016-08-18 15:25:15,104 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63563 because read count=-1. Number of active connections: 11
2016-08-18 15:25:15,106 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns1:test-1471559060953' to 'ns1:table1_restore' from log dirs: hdfs://localhost:63272/backupUT/backup_1471559097949/WALs
2016-08-18 15:25:15,106 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2acecc33 connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:15,108 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x2acecc330x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:15,109 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49135394, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:15,109 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 15:25:15,109 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:15,110 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x2acecc33-0x1569fc0e7310020 connected
2016-08-18 15:25:15,111 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:15,111 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63567; # active connections: 10
2016-08-18 15:25:15,112 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:15,112 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63567 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:15,118 INFO [main] mapreduce.MapReduceRestoreService(75): Restore incremental backup from directory hdfs://localhost:63272/backupUT/backup_1471559097949/WALs from hbase tables ,ns1:test-1471559060953 to tables ,ns1:table1_restore
2016-08-18 15:25:15,118 INFO [main] mapreduce.MapReduceRestoreService(80): Restore ns1:test-1471559060953 into ns1:table1_restore
2016-08-18 15:25:15,122 DEBUG [main] mapreduce.WALPlayer(307): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471559115118 from hdfs://localhost:63272/backupUT/backup_1471559097949/WALs to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471559115118
2016-08-18 15:25:15,125 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x146ec824 connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:15,127 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x146ec8240x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:15,128 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@669bef85, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:15,128 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 15:25:15,128 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:15,129 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x146ec824-0x1569fc0e7310021 connected
2016-08-18 15:25:15,130 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:15,131 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63569; # active connections: 11
2016-08-18 15:25:15,131 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:15,131 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63569 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:15,137 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled
2016-08-18 15:25:15,137 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns1:test-1471559060953
2016-08-18 15:25:15,141 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:15,141 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63570; # active connections: 12
2016-08-18 15:25:15,141 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:15,142 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63570 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:15,145 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count
2016-08-18 15:25:15,146 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_7de26802-ac91-42e2-9d67-d34f32bf4441
2016-08-18 15:25:15,157 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741925_1101{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:15,161 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
2016-08-18 15:25:15,356 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-3439253223190702720.jar
2016-08-18 15:25:16,545 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-3551711677191251819.jar
2016-08-18 15:25:16,932 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-7671794390045483558.jar
2016-08-18 15:25:16,953 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-4143616571044364191.jar
2016-08-18 15:25:17,264 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 15:25:18,155 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-4742881047285646483.jar
2016-08-18 15:25:18,156 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-18 15:25:18,156 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-18 15:25:18,156 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-18 15:25:18,157 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-18 15:25:18,157 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-18 15:25:18,157 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-18 15:25:18,367 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-8556811873808508437.jar
2016-08-18 15:25:18,368 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-8556811873808508437.jar
2016-08-18 15:25:19,553 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-4869214582784883617.jar
2016-08-18 15:25:19,553 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-8556811873808508437.jar
2016-08-18 15:25:19,554 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-8556811873808508437.jar
2016-08-18 15:25:19,554 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-4869214582784883617.jar
2016-08-18 15:25:19,554 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.2/hadoop-mapreduce-client-core-2.7.2.jar
2016-08-18 15:25:19,554 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns1:test-1471559060953 output configured.
2016-08-18 15:25:19,555 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 15:25:19,555 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310021
2016-08-18 15:25:19,555 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:19,556 DEBUG [main] mapreduce.WALPlayer(325): success configuring load incremental job
2016-08-18 15:25:19,556 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (2082537995) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:19,556 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63570 because read count=-1. Number of active connections: 12
2016-08-18 15:25:19,556 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63569 because read count=-1. Number of active connections: 12
2016-08-18 15:25:19,556 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (657255928) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:19,557 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-18 15:25:19,694 WARN [main] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-08-18 15:25:19,707 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741926_1102{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,719 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741927_1103{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:19,733 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741928_1104{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,741 INFO [IPC Server handler 1 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741929_1105{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:19,749 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741930_1106{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,771 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741931_1107{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:19,782 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741932_1108{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,791 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741933_1109{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:19,808 INFO [IPC Server handler 1 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741934_1110{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,819 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741935_1111{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,827 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741936_1112{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,833 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741937_1113{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,843 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741938_1114{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,862 INFO [IPC Server handler 1 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741939_1115{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,863 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-18 15:25:19,877 DEBUG [main] mapreduce.WALInputFormat(265): Scanning hdfs://localhost:63272/backupUT/backup_1471559097949/WALs for WAL files
2016-08-18 15:25:19,880 WARN [main] mapreduce.WALInputFormat(289): File hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/.backup.manifest does not appear to be a WAL file. Skipping...
2016-08-18 15:25:19,880 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100571; access_time=1471559100563; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:19,881 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116; isDirectory=false; length=981; replication=1; blocksize=134217728; modification_time=1471559100104; access_time=1471559100092; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:19,881 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100589; access_time=1471559100580; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:19,881 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100612; access_time=1471559100599; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:19,881 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158; isDirectory=false; length=1629; replication=1; blocksize=134217728; modification_time=1471559100532; access_time=1471559100120; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:19,881 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398; isDirectory=false; length=10957; replication=1; blocksize=134217728; modification_time=1471559100630; access_time=1471559100622; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:19,881 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577; isDirectory=false; length=11592; replication=1; blocksize=134217728; modification_time=1471559100553; access_time=1471559100545; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:19,881 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843; isDirectory=false; length=11059; replication=1; blocksize=134217728; modification_time=1471559100646; access_time=1471559100638; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:25:19,891 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741940_1116{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,898 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741941_1117{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:19,911 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741942_1118{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:20,116 WARN [ResourceManager Event Processor] capacity.LeafQueue(610): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-18 15:25:20,116 WARN [ResourceManager Event Processor] capacity.LeafQueue(631): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-18 15:25:20,367 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:25,480 INFO [Socket Reader #1 for port 63350] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:25,749 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741943_1119{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:27,741 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:27,741 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:28,601 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:28,601 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:29,617 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:30,625 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:32,531 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:32,579 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0001_01_000002 is : 143
2016-08-18 15:25:33,656 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:34,407 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:34,430 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0001_01_000004 is : 143
2016-08-18 15:25:34,970 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:34,992 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0001_01_000003 is : 143
2016-08-18 15:25:35,054 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:35,077 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0001_01_000005 is : 143
2016-08-18 15:25:35,593 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:35,614 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0001_01_000006 is : 143
2016-08-18 15:25:35,686 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:36,019 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:36,035 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0001_01_000007 is : 143
2016-08-18 15:25:36,704 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:37,670 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:37,687 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0001_01_000008 is : 143
2016-08-18 15:25:38,838 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:38,853 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0001_01_000009 is : 143
2016-08-18 15:25:41,364 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63667; # active connections: 11
2016-08-18 15:25:41,736 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:41,736 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63667 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:41,960 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63667 because read count=-1. Number of active connections: 11
2016-08-18 15:25:42,611 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741945_1121{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:42,639 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:42,655 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0001_01_000010 is : 143
2016-08-18 15:25:42,693 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741944_1120{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 16357
2016-08-18 15:25:42,702 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741946_1122{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:42,722 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741947_1123{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:42,741 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741948_1124{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:43,768 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741940_1116 127.0.0.1:63273
2016-08-18 15:25:43,768 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741941_1117 127.0.0.1:63273
2016-08-18 15:25:43,768 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741942_1118 127.0.0.1:63273
2016-08-18 15:25:43,768 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741944_1120 127.0.0.1:63273
2016-08-18 15:25:43,768 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741943_1119 127.0.0.1:63273
2016-08-18 15:25:43,769 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741938_1114 127.0.0.1:63273
2016-08-18 15:25:43,769 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741927_1103 127.0.0.1:63273
2016-08-18 15:25:43,769 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741939_1115 127.0.0.1:63273
2016-08-18 15:25:43,769 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741937_1113 127.0.0.1:63273
2016-08-18 15:25:43,769 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741934_1110 127.0.0.1:63273
2016-08-18 15:25:43,769 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741931_1107 127.0.0.1:63273
2016-08-18 15:25:43,769 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741928_1104 127.0.0.1:63273
2016-08-18 15:25:43,769 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741930_1106 127.0.0.1:63273
2016-08-18 15:25:43,770 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741932_1108 127.0.0.1:63273
2016-08-18 15:25:43,770 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741933_1109 127.0.0.1:63273
2016-08-18 15:25:43,770 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741929_1105 127.0.0.1:63273
2016-08-18 15:25:43,770 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741935_1111 127.0.0.1:63273
2016-08-18 15:25:43,770 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741926_1102 127.0.0.1:63273
2016-08-18 15:25:43,770 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741936_1112 127.0.0.1:63273
2016-08-18 15:25:44,564 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2fc1fd9c] blockmanagement.BlockManager(3482): BLOCK* BlockManager: ask 127.0.0.1:63273 to delete [blk_1073741926_1102, blk_1073741927_1103, blk_1073741928_1104, blk_1073741929_1105, blk_1073741930_1106, blk_1073741931_1107, blk_1073741932_1108, blk_1073741933_1109, blk_1073741934_1110, blk_1073741935_1111, blk_1073741936_1112, blk_1073741937_1113, blk_1073741938_1114, blk_1073741939_1115, blk_1073741940_1116, blk_1073741941_1117, blk_1073741942_1118, blk_1073741943_1119, blk_1073741944_1120]
2016-08-18 15:25:44,661 DEBUG [main] mapreduce.MapReduceRestoreService(101): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471559115118
2016-08-18 15:25:44,661 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3f2411a connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:44,667 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x3f2411a0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:44,668 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@555673be, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:44,668 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 15:25:44,668 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:44,669 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x3f2411a-0x1569fc0e7310023 connected
2016-08-18 15:25:44,671 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:44,671 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63677; # active connections: 11
2016-08-18 15:25:44,672 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:44,672 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63677 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:44,678 DEBUG [main] client.ConnectionImplementation(604): Table ns1:table1_restore should be available
2016-08-18 15:25:44,681 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471559115118/_SUCCESS
2016-08-18 15:25:44,687 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:44,687 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63680; # active connections: 12
2016-08-18 15:25:44,688 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:44,688 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63680 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:44,693 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 15:25:44,697 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471559115118/f/e08884fd67a249138ca8e2a0cfeaaf8d first=row-t10 last=row98
2016-08-18 15:25:44,700 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c., hostname=10.22.9.171,63282,1471559038490, seqNum=2 for row with hfile group [{[B@49859504,hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471559115118/f/e08884fd67a249138ca8e2a0cfeaaf8d}]
2016-08-18 15:25:44,701 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:44,701 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63681; # active connections: 7
2016-08-18 15:25:44,702 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:44,702 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63681 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:44,702 INFO [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(670): Validating hfile at hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471559115118/f/e08884fd67a249138ca8e2a0cfeaaf8d for inclusion in store f region ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:25:44,707 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(682): HFile bounds: first=row-t10 last=row98
2016-08-18 15:25:44,707 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(684): Region bounds: first= last=
2016-08-18 15:25:44,709 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471559115118/f/e08884fd67a249138ca8e2a0cfeaaf8d as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/0f314f62d6474684b99acce756105330_SeqId_6_
2016-08-18 15:25:44,711 INFO [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(742): Loaded HFile hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471559115118/f/e08884fd67a249138ca8e2a0cfeaaf8d into store 'f' as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/0f314f62d6474684b99acce756105330_SeqId_6_ - updating store file list.
2016-08-18 15:25:44,717 INFO [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(777): Loaded HFile hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/0f314f62d6474684b99acce756105330_SeqId_6_ into store 'f'
2016-08-18 15:25:44,717 INFO [B.defaultRpcServer.handler=4,queue=0,port=63282] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns1-table1_restore-1471559115118/f/e08884fd67a249138ca8e2a0cfeaaf8d into store f (new location: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/f/0f314f62d6474684b99acce756105330_SeqId_6_)
2016-08-18 15:25:44,718 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:25:44,719 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 15:25:44,720 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310023
2016-08-18 15:25:44,720 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:44,721 DEBUG [main] mapreduce.MapReduceRestoreService(113): Restore Job finished:0
2016-08-18 15:25:44,721 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63681 because read count=-1. Number of active connections: 7
2016-08-18 15:25:44,721 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310020
2016-08-18 15:25:44,721 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (-978156257) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:44,721 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63677 because read count=-1. Number of active connections: 12
2016-08-18 15:25:44,721 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63680 because read count=-1. Number of active connections: 12
2016-08-18 15:25:44,721 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (1281558992) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:44,721 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-1365080161) to /10.22.9.171:63282 from tyu: closed
2016-08-18 15:25:44,722 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:44,722 INFO [main] impl.RestoreClientImpl(292): ns1:test-1471559060953 has been successfully restored to ns1:table1_restore
2016-08-18 15:25:44,722 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s):
2016-08-18 15:25:44,723 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471559069358 hdfs://localhost:63272/backupUT/backup_1471559069358/ns1/test-1471559060953/
2016-08-18 15:25:44,723 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471559097949 hdfs://localhost:63272/backupUT/backup_1471559097949/ns1/test-1471559060953/
2016-08-18 15:25:44,722 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63567 because read count=-1. Number of active connections: 10
2016-08-18 15:25:44,722 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (1599000342) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:44,723 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira
2016-08-18 15:25:44,724 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/.backup.manifest
2016-08-18 15:25:44,727 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358
2016-08-18 15:25:44,727 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns2/test-14715590609531/.backup.manifest
2016-08-18 15:25:44,727 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns2:test-14715590609531' to 'ns2:table2_restore' from full backup image hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531
2016-08-18 15:25:44,736 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7ae4f90c connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:44,738 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x7ae4f90c0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:44,739 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d5171a6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:44,739 DEBUG [main] ipc.AsyncRpcClient(160): Starting async HBase RPC client
2016-08-18 15:25:44,739 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:44,739 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x7ae4f90c-0x1569fc0e7310024 connected
2016-08-18 15:25:44,740 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:44,741 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63686; # active connections: 10
2016-08-18 15:25:44,741 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:44,741 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63686 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:44,742 INFO [main] util.RestoreServerUtil(585): Truncating existing target table 'ns2:table2_restore', preserving region splits
2016-08-18 15:25:44,743 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:44,743 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63687; # active connections: 11
2016-08-18 15:25:44,744 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:44,744 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63687 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:44,745 INFO [main] client.HBaseAdmin$10(780): Started disable of ns2:table2_restore
2016-08-18 15:25:44,745 INFO [B.defaultRpcServer.handler=3,queue=0,port=63280] master.HMaster(1986): Client=tyu//10.22.9.171 disable ns2:table2_restore
2016-08-18 15:25:44,854 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns2:table2_restore) id=21 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store.
2016-08-18 15:25:44,857 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=21
2016-08-18 15:25:44,858 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:632800000000001
2016-08-18 15:25:44,963 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=21
2016-08-18 15:25:45,069 DEBUG [ProcedureExecutor-4] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559145069,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"}
2016-08-18 15:25:45,070 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:45,071 INFO [ProcedureExecutor-4] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to DISABLING in META
2016-08-18 15:25:45,165 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=21
2016-08-18 15:25:45,177 INFO [ProcedureExecutor-4] procedure.DisableTableProcedure(395): Offlining 1 regions.
2016-08-18 15:25:45,179 DEBUG [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. (offlining), current state: {bf3c6d412d1b40a1b33f3f2c30bb496a state=OPEN, ts=1471559108160, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:45,179 INFO [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {bf3c6d412d1b40a1b33f3f2c30bb496a state=OPEN, ts=1471559108160, server=10.22.9.171,63282,1471559038490} to {bf3c6d412d1b40a1b33f3f2c30bb496a state=PENDING_CLOSE, ts=1471559145179, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:45,179 INFO [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. with state=PENDING_CLOSE
2016-08-18 15:25:45,180 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:45,181 INFO [PriorityRpcServer.handler=0,queue=0,port=63282] regionserver.RSRpcServices(1314): Close bf3c6d412d1b40a1b33f3f2c30bb496a, moving to null
2016-08-18 15:25:45,181 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(90): Processing close of ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:45,181 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.: disabling compactions & flushes
2016-08-18 15:25:45,181 DEBUG [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.9.171,63282,1471559038490 for region ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:45,182 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:45,183 INFO [StoreCloserThread-ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.-1] regionserver.HStore(839): Closed f
2016-08-18 15:25:45,183 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:25:45,188 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/recovered.edits/6.seqid to file, newSeqId=6, maxSeqId=2
2016-08-18 15:25:45,189 INFO [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:45,189 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.AssignmentManager(2884): Got transition CLOSED for {bf3c6d412d1b40a1b33f3f2c30bb496a state=PENDING_CLOSE, ts=1471559145179, server=10.22.9.171,63282,1471559038490} from 10.22.9.171,63282,1471559038490
2016-08-18 15:25:45,190 INFO [B.defaultRpcServer.handler=1,queue=0,port=63280] master.RegionStates(1106): Transition {bf3c6d412d1b40a1b33f3f2c30bb496a state=PENDING_CLOSE, ts=1471559145179, server=10.22.9.171,63282,1471559038490} to {bf3c6d412d1b40a1b33f3f2c30bb496a state=OFFLINE, ts=1471559145190, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:45,190 INFO [B.defaultRpcServer.handler=1,queue=0,port=63280] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. with state=OFFLINE
2016-08-18 15:25:45,191 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:45,191 INFO [B.defaultRpcServer.handler=1,queue=0,port=63280] master.RegionStates(590): Offlined bf3c6d412d1b40a1b33f3f2c30bb496a from 10.22.9.171,63282,1471559038490
2016-08-18 15:25:45,192 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(122): Closed ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:45,337 DEBUG [ProcedureExecutor-4] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559145337,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"}
2016-08-18 15:25:45,339 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:45,340 INFO [ProcedureExecutor-4] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to DISABLED in META
2016-08-18 15:25:45,340 INFO [ProcedureExecutor-4] procedure.DisableTableProcedure(424): Disabled table, ns2:table2_restore, is completed.
2016-08-18 15:25:45,471 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=21
2016-08-18 15:25:45,553 DEBUG [ProcedureExecutor-4] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:632800000000001
2016-08-18 15:25:45,553 DEBUG [ProcedureExecutor-4] procedure2.ProcedureExecutor(870): Procedure completed in 701msec: DisableTableProcedure (table=ns2:table2_restore) id=21 owner=tyu state=FINISHED
2016-08-18 15:25:45,973 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=21
2016-08-18 15:25:45,974 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns2:table2_restore completed
2016-08-18 15:25:45,974 INFO [main] client.HBaseAdmin$8(615): Started truncating ns2:table2_restore
2016-08-18 15:25:45,975 INFO [B.defaultRpcServer.handler=4,queue=0,port=63280] master.HMaster(1848): Client=tyu//10.22.9.171 truncate ns2:table2_restore
2016-08-18 15:25:46,081 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns2:table2_restore preserveSplits=true) id=22 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store.
2016-08-18 15:25:46,085 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns2:table2_restore/write-master:632800000000002
2016-08-18 15:25:46,086 DEBUG [ProcedureExecutor-5] procedure.TruncateTableProcedure(87): waiting for 'ns2:table2_restore' regions in transition
2016-08-18 15:25:46,192 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1471559146192,"tag":[],"qualifier":"","vlen":0}]},"row":"ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a."}
2016-08-18 15:25:46,194 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:46,195 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => bf3c6d412d1b40a1b33f3f2c30bb496a, NAME => 'ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.', STARTKEY => '', ENDKEY => ''}]
2016-08-18 15:25:46,196 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(408): Removing 'ns2:table2_restore' from region states.
2016-08-18 15:25:46,197 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(412): Marking 'ns2:table2_restore' as deleted.
2016-08-18 15:25:46,197 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1471559146197,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns2:table2_restore"}
2016-08-18 15:25:46,198 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:46,199 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1726): Deleted table ns2:table2_restore state from META
2016-08-18 15:25:46,308 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(340): Archiving region ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. from FS
2016-08-18 15:25:46,308 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a
2016-08-18 15:25:46,311 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/recovered.edits]
2016-08-18 15:25:46,318 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/65825414117048bfb844bd376703334b_SeqId_4_, to hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/archive/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/65825414117048bfb844bd376703334b_SeqId_4_
2016-08-18 15:25:46,323 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/recovered.edits/6.seqid, to hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/archive/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/recovered.edits/6.seqid
2016-08-18 15:25:46,323 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741916_1092 127.0.0.1:63273
2016-08-18 15:25:46,324 DEBUG [ProcedureExecutor-5] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a
2016-08-18 15:25:46,324 DEBUG [ProcedureExecutor-5] procedure.DeleteTableProcedure(344): Table 'ns2:table2_restore' archived!
2016-08-18 15:25:46,325 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741915_1091 127.0.0.1:63273
2016-08-18 15:25:46,444 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741949_1125{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:46,446 DEBUG [ProcedureExecutor-5] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns2/table2_restore/.tabledesc/.tableinfo.0000000001
2016-08-18 15:25:46,447 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(6162): creating HRegion ns2:table2_restore HTD == 'ns2:table2_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp Table name == ns2:table2_restore
2016-08-18 15:25:46,455 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741950_1126{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:25:46,455 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:46,456 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.: disabling compactions & flushes
2016-08-18 15:25:46,456 DEBUG [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:46,456 INFO [RegionOpenAndInitThread-ns2:table2_restore-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:46,565 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a."}
2016-08-18 15:25:46,566 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:46,567 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1571): Added 1
2016-08-18 15:25:46,672 INFO [ProcedureExecutor-5] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,63282,1471559038490
2016-08-18 15:25:46,673 ERROR [ProcedureExecutor-5] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 15:25:46,673 INFO [ProcedureExecutor-5] master.RegionStates(1106): Transition {bf3c6d412d1b40a1b33f3f2c30bb496a state=OFFLINE, ts=1471559146672, server=null} to {bf3c6d412d1b40a1b33f3f2c30bb496a state=PENDING_OPEN, ts=1471559146673, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:46,674 INFO [ProcedureExecutor-5] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. with state=PENDING_OPEN, sn=10.22.9.171,63282,1471559038490
2016-08-18 15:25:46,674 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:46,676 INFO [PriorityRpcServer.handler=2,queue=0,port=63282] regionserver.RSRpcServices(1666): Open ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:46,681 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] regionserver.HRegion(6339): Opening region: {ENCODED => bf3c6d412d1b40a1b33f3f2c30bb496a, NAME => 'ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.', STARTKEY => '', ENDKEY => ''}
2016-08-18 15:25:46,681 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table2_restore bf3c6d412d1b40a1b33f3f2c30bb496a
2016-08-18 15:25:46,682 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] regionserver.HRegion(736): Instantiated ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:46,684 INFO [StoreOpener-bf3c6d412d1b40a1b33f3f2c30bb496a-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 15:25:46,685 INFO [StoreOpener-bf3c6d412d1b40a1b33f3f2c30bb496a-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4, incoming window min 6
2016-08-18 15:25:46,686 DEBUG [StoreOpener-bf3c6d412d1b40a1b33f3f2c30bb496a-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f
2016-08-18 15:25:46,687 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a
2016-08-18 15:25:46,692 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-08-18 15:25:46,692 INFO [RS_OPEN_REGION-10.22.9.171:63282-1] regionserver.HRegion(871): Onlined bf3c6d412d1b40a1b33f3f2c30bb496a; next sequenceid=2
2016-08-18 15:25:46,692 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:25:46,693 INFO [PostOpenDeployTasks:bf3c6d412d1b40a1b33f3f2c30bb496a] regionserver.HRegionServer(1952): Post open deploy tasks for ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:46,693 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.AssignmentManager(2884): Got transition OPENED for {bf3c6d412d1b40a1b33f3f2c30bb496a state=PENDING_OPEN, ts=1471559146673, server=10.22.9.171,63282,1471559038490} from 10.22.9.171,63282,1471559038490
2016-08-18 15:25:46,694 INFO [B.defaultRpcServer.handler=2,queue=0,port=63280] master.RegionStates(1106): Transition {bf3c6d412d1b40a1b33f3f2c30bb496a state=PENDING_OPEN, ts=1471559146673, server=10.22.9.171,63282,1471559038490} to {bf3c6d412d1b40a1b33f3f2c30bb496a state=OPEN, ts=1471559146694, server=10.22.9.171,63282,1471559038490}
2016-08-18 15:25:46,694 INFO [B.defaultRpcServer.handler=2,queue=0,port=63280] master.RegionStateStore(207): Updating hbase:meta row ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. with state=OPEN, openSeqNum=2, server=10.22.9.171,63282,1471559038490
2016-08-18 15:25:46,694 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:46,695 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.RegionStates(452): Onlined bf3c6d412d1b40a1b33f3f2c30bb496a on 10.22.9.171,63282,1471559038490
2016-08-18 15:25:46,695 DEBUG [ProcedureExecutor-5] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,63282,1471559038490
2016-08-18 15:25:46,695 DEBUG [ProcedureExecutor-5] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559146695,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns2:table2_restore"}
2016-08-18 15:25:46,695 ERROR [B.defaultRpcServer.handler=2,queue=0,port=63280] master.TableStateManager(134): Unable to get table ns2:table2_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns2:table2_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 15:25:46,696 DEBUG [PostOpenDeployTasks:bf3c6d412d1b40a1b33f3f2c30bb496a] regionserver.HRegionServer(1979): Finished post open deploy task for ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:46,696 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:25:46,697 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-1] handler.OpenRegionHandler(126): Opened ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. on 10.22.9.171,63282,1471559038490
2016-08-18 15:25:46,697 INFO [ProcedureExecutor-5] hbase.MetaTableAccessor(1700): Updated table ns2:table2_restore state to ENABLED in META
2016-08-18 15:25:46,805 DEBUG [ProcedureExecutor-5] procedure.TruncateTableProcedure(129): truncate 'ns2:table2_restore' completed
2016-08-18 15:25:46,914 DEBUG [ProcedureExecutor-5] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns2:table2_restore/write-master:632800000000002
2016-08-18 15:25:46,914 DEBUG [ProcedureExecutor-5] procedure2.ProcedureExecutor(870): Procedure completed in 830msec: TruncateTableProcedure (table=ns2:table2_restore preserveSplits=true) id=22 owner=tyu state=FINISHED
2016-08-18 15:25:47,091 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=22
2016-08-18 15:25:47,092 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns2:table2_restore completed
2016-08-18 15:25:47,092 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 15:25:47,092 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310024
2016-08-18 15:25:47,095 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:47,097 DEBUG [main] util.RestoreServerUtil(255): cluster hold the backup image: hdfs://localhost:63272; local cluster node: hdfs://localhost:63272
2016-08-18 15:25:47,097 DEBUG [main] util.RestoreServerUtil(261): File hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/archive/data/ns2/test-14715590609531 on local cluster, back it up before restore
2016-08-18 15:25:47,097 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63687 because read count=-1. Number of active connections: 11
2016-08-18 15:25:47,097 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63686 because read count=-1. Number of active connections: 11
2016-08-18 15:25:47,097 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (-1359682606) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:47,097 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (-1119680480) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:47,114 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741951_1127{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:25:47,114 DEBUG [main] util.RestoreServerUtil(271): Copied to temporary path on local cluster: /user/tyu/hbase-staging/restore
2016-08-18 15:25:47,115 DEBUG [main] util.RestoreServerUtil(355): TableArchivePath for bulkload using tempPath: /user/tyu/hbase-staging/restore
2016-08-18 15:25:47,182 DEBUG [main] util.RestoreServerUtil(363): Restoring HFiles from directory hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21
2016-08-18 15:25:47,183 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x691075b0 connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:47,188 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x691075b00x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:47,189 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d1139a2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:47,189 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:47,189 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:47,190 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x691075b0-0x1569fc0e7310025 connected
2016-08-18 15:25:47,191 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:47,191 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63692; # active connections: 10
2016-08-18 15:25:47,192 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:47,192 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63692 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:47,199 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available
2016-08-18 15:25:47,204 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:47,204 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63693; # active connections: 11
2016-08-18 15:25:47,205 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:47,205 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63693 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:47,210 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-08-18 15:25:47,213 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 first=row0 last=row98
2016-08-18 15:25:47,217 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a., hostname=10.22.9.171,63282,1471559038490, seqNum=2 for row with hfile group [{[B@3ab912fb,hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0}]
2016-08-18 15:25:47,218 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:47,218 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63694; # active connections: 7
2016-08-18 15:25:47,219 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:47,219 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63694 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:47,219 INFO [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(670): Validating hfile at hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 for inclusion in store f region ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:25:47,222 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(682): HFile bounds: first=row0 last=row98
2016-08-18 15:25:47,222 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(684): Region bounds: first= last=
2016-08-18 15:25:47,224 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/9168db725aab461eb39d52ed48d82aec_SeqId_4_
2016-08-18 15:25:47,225 INFO [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(742): Loaded HFile hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 into store 'f' as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/9168db725aab461eb39d52ed48d82aec_SeqId_4_ - updating store file list.
2016-08-18 15:25:47,230 INFO [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(777): Loaded HFile hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/9168db725aab461eb39d52ed48d82aec_SeqId_4_ into store 'f'
2016-08-18 15:25:47,231 INFO [B.defaultRpcServer.handler=3,queue=0,port=63282] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:63272/user/tyu/hbase-staging/restore/eafb138c6dd37e9e90df990bbe563d21/f/6da6c850be474a888309ae7f6e7279f0 into store f (new location: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/9168db725aab461eb39d52ed48d82aec_SeqId_4_)
2016-08-18 15:25:47,231 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:25:47,232 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 15:25:47,232 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310025
2016-08-18 15:25:47,233 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:47,234 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns2:test-14715590609531' to 'ns2:table2_restore' from log dirs: hdfs://localhost:63272/backupUT/backup_1471559097949/WALs
2016-08-18 15:25:47,234 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63693 because read count=-1. Number of active connections: 11
2016-08-18 15:25:47,234 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (2014895916) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:47,234 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (1390416379) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:47,234 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (-2118093229) to /10.22.9.171:63282 from tyu: closed
2016-08-18 15:25:47,234 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63694 because read count=-1. Number of active connections: 7
2016-08-18 15:25:47,234 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63692 because read count=-1. Number of active connections: 11
2016-08-18 15:25:47,234 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x73297fba connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:47,237 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x73297fba0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:47,237 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e4678bb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:47,237 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:47,238 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:47,238 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x73297fba-0x1569fc0e7310026 connected
2016-08-18 15:25:47,239 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:47,239 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63696; # active connections: 10
2016-08-18 15:25:47,240 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:47,240 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63696 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:47,241 INFO [main] mapreduce.MapReduceRestoreService(75): Restore incremental backup from directory hdfs://localhost:63272/backupUT/backup_1471559097949/WALs from hbase tables ,ns2:test-14715590609531 to tables ,ns2:table2_restore
2016-08-18 15:25:47,241 INFO [main] mapreduce.MapReduceRestoreService(80): Restore ns2:test-14715590609531 into ns2:table2_restore
2016-08-18 15:25:47,243 DEBUG [main] mapreduce.WALPlayer(307): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241 from hdfs://localhost:63272/backupUT/backup_1471559097949/WALs to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241
2016-08-18 15:25:47,243 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2d05fbe6 connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:47,245 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x2d05fbe60x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:47,245 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@16299d8c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:47,246 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:47,246 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:47,246 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x2d05fbe6-0x1569fc0e7310027 connected
2016-08-18 15:25:47,247 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false
2016-08-18 15:25:47,247 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63698; # active connections: 11
2016-08-18 15:25:47,248 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:47,248 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63698 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:47,249 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled
2016-08-18 15:25:47,249 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns2:test-14715590609531
2016-08-18 15:25:47,252 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:47,252 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63699; # active connections: 12
2016-08-18 15:25:47,253 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:47,253 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63699 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:47,256 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count
2016-08-18 15:25:47,256 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_f19479ff-850d-41fc-ba58-4edd473c4632
2016-08-18 15:25:47,262 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741952_1128{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 153
2016-08-18 15:25:47,569 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2fc1fd9c] blockmanagement.BlockManager(3482): BLOCK* BlockManager: ask 127.0.0.1:63273 to delete [blk_1073741915_1091, blk_1073741916_1092]
2016-08-18 15:25:47,670 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
2016-08-18 15:25:48,427 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-8941670902581618549.jar
2016-08-18 15:25:49,711 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0001_000001 (auth:SIMPLE)
2016-08-18 15:25:50,694 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 15:25:57,647 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-1290991817105460848.jar
2016-08-18 15:25:58,860 DEBUG [10.22.9.171,63282,1471559038490_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 15:25:58,971 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 15:25:58,972 INFO [10.22.9.171,63280,1471559038246_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x774996b8 connecting to ZooKeeper ensemble=localhost:61765
2016-08-18 15:25:58,974 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x774996b80x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-08-18 15:25:58,975 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cdc79c3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-08-18 15:25:58,975 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AsyncRpcClient(160): Starting async Hbase RPC client
2016-08-18 15:25:58,975 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup
2016-08-18 15:25:58,975 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup
2016-08-18 15:25:58,976 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x774996b8-0x1569fc0e7310028 connected
2016-08-18 15:25:58,978 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:58,978 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63709; # active connections: 13
2016-08-18 15:25:58,979 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:58,980 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63709 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:58,983 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false
2016-08-18 15:25:58,983 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63710; # active connections: 7
2016-08-18 15:25:58,983 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE)
2016-08-18 15:25:58,984 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63710 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0
2016-08-18 15:25:58,986 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559040101
2016-08-18 15:25:58,987 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559040101
2016-08-18 15:25:58,987 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590
2016-08-18 15:25:58,988 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590
2016-08-18 15:25:58,988 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577
2016-08-18 15:25:58,989 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577
2016-08-18 15:25:58,989 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559040101
2016-08-18 15:25:58,990 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559040101
2016-08-18 15:25:58,990 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994
2016-08-18 15:25:58,991 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994
2016-08-18 15:25:58,991 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398
2016-08-18 15:25:58,992 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398
2016-08-18 15:25:58,992 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843
2016-08-18 15:25:58,993 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843
2016-08-18 15:25:58,994 INFO [10.22.9.171,63280,1471559038246_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310028
2016-08-18 15:25:58,994 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:25:58,995 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63710 because read count=-1. Number of active connections: 7
2016-08-18 15:25:58,995 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63709 because read count=-1. Number of active connections: 13
2016-08-18 15:25:58,995 DEBUG [AsyncRpcChannel-pool2-t11] ipc.AsyncRpcChannel$8(566): IPC Client (107557165) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:25:58,995 DEBUG [AsyncRpcChannel-pool2-t12] ipc.AsyncRpcChannel$8(566): IPC Client (-407556674) to /10.22.9.171:63282 from tyu: closed
2016-08-18 15:25:59,295 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-6966760505054353020.jar
2016-08-18 15:25:59,346 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-1273359183821997739.jar
2016-08-18 15:26:02,313 DEBUG [10.22.9.171,63314,1471559042157_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 15:26:02,390 DEBUG [10.22.9.171,63319,1471559042214_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec
2016-08-18 15:26:02,651 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/info
2016-08-18 15:26:02,651 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/backup/850b903ca0af513aa15775825a9a082c/meta
2016-08-18 15:26:02,651 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/namespace/a3b1a9605e4887d65b7f50b16f400740/info
2016-08-18 15:26:02,651 DEBUG [region-location-0] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/table
2016-08-18 15:26:02,652 DEBUG [region-location-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/backup/850b903ca0af513aa15775825a9a082c/session
2016-08-18 15:26:06,291 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-5845824162238535506.jar
2016-08-18 15:26:06,291 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar
2016-08-18 15:26:06,292 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar
2016-08-18 15:26:06,292 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar
2016-08-18 15:26:06,292 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-18 15:26:06,293 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar
2016-08-18 15:26:06,293 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar
2016-08-18 15:26:06,498 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-735150447438790510.jar
2016-08-18 15:26:06,498 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-735150447438790510.jar
2016-08-18 15:26:07,680 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-1294676796188106849.jar
2016-08-18 15:26:07,681 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-735150447438790510.jar
2016-08-18 15:26:07,681 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-735150447438790510.jar
2016-08-18 15:26:07,681 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-1294676796188106849.jar
2016-08-18 15:26:07,682 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.2/hadoop-mapreduce-client-core-2.7.2.jar
2016-08-18 15:26:07,682 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns2:test-14715590609531 output configured.
2016-08-18 15:26:07,682 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService
2016-08-18 15:26:07,682 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310027
2016-08-18 15:26:07,683 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:26:07,684 DEBUG [main] mapreduce.WALPlayer(325): success configuring load incremental job
2016-08-18 15:26:07,684 DEBUG [AsyncRpcChannel-pool2-t9] ipc.AsyncRpcChannel$8(566): IPC Client (-1997824017) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:26:07,684 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63699 because read count=-1. Number of active connections: 12
2016-08-18 15:26:07,684 DEBUG [AsyncRpcChannel-pool2-t10] ipc.AsyncRpcChannel$8(566): IPC Client (-1762667027) to /10.22.9.171:63280 from tyu: closed
2016-08-18 15:26:07,684 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63698 because read count=-1. Number of active connections: 12
2016-08-18 15:26:07,684 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar
2016-08-18 15:26:08,007 WARN [main] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-08-18 15:26:08,032 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741953_1129{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:26:08,041 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741954_1130{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:26:08,049 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741955_1131{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:26:08,054 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741956_1132{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:26:08,071 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741957_1133{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:26:08,080 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741958_1134{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:26:08,087 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741959_1135{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 112558
2016-08-18 15:26:08,507 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741960_1136{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:26:08,518 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741961_1137{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:26:08,527 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741962_1138{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:26:08,539 INFO [IPC Server handler 1 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741963_1139{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:26:08,555 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741964_1140{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:26:08,564 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741965_1141{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:26:08,574 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741966_1142{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:26:08,575 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-18 15:26:08,587 DEBUG [main] mapreduce.WALInputFormat(265): Scanning hdfs://localhost:63272/backupUT/backup_1471559097949/WALs for WAL files
2016-08-18 15:26:08,590 WARN [main] mapreduce.WALInputFormat(289): File hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/.backup.manifest does not appear to be an WAL file. Skipping...
2016-08-18 15:26:08,591 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100571; access_time=1471559100563; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:26:08,591 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116; isDirectory=false; length=981; replication=1; blocksize=134217728; modification_time=1471559100104; access_time=1471559100092; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:26:08,591 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100589; access_time=1471559100580; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:26:08,591 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100612; access_time=1471559100599; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:26:08,591 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158; isDirectory=false; length=1629; replication=1; blocksize=134217728; modification_time=1471559100532; access_time=1471559100120; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:26:08,591 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398; isDirectory=false; length=10957; replication=1; blocksize=134217728; modification_time=1471559100630; access_time=1471559100622; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:26:08,592 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577; isDirectory=false; length=11592; replication=1; blocksize=134217728; modification_time=1471559100553; access_time=1471559100545; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:26:08,592 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843; isDirectory=false; length=11059; replication=1; blocksize=134217728; modification_time=1471559100646; access_time=1471559100638; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false}
2016-08-18 15:26:08,600 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741967_1143{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 1647
2016-08-18 15:26:09,014 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741968_1144{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:26:09,045 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741969_1145{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:26:09,083 WARN [ResourceManager Event Processor] capacity.LeafQueue(610): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-18 15:26:09,083 WARN [ResourceManager Event Processor] capacity.LeafQueue(631): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start
2016-08-18 15:26:09,728 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE)
2016-08-18 15:26:14,522 INFO [Socket Reader #1 for port 63350] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE)
2016-08-18 15:26:14,770 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741970_1146{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:26:16,739 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE)
2016-08-18 15:26:16,739 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE)
2016-08-18 15:26:17,603 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE)
2016-08-18 15:26:17,603 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE)
2016-08-18 15:26:18,612 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE)
2016-08-18 15:26:19,620 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE)
2016-08-18 15:26:20,903 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE)
2016-08-18 15:26:20,924 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0002_01_000002 is : 143
2016-08-18 15:26:22,645 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE)
2016-08-18 15:26:22,718 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:22,745 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0002_01_000004 is : 143 2016-08-18 15:26:22,813 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:22,834 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0002_01_000003 is : 143 2016-08-18 15:26:22,909 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:22,931 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0002_01_000005 is : 143 2016-08-18 15:26:23,648 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:23,832 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:23,850 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0002_01_000006 is : 143 2016-08-18 15:26:24,347 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:24,365 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0002_01_000007 is : 143 2016-08-18 15:26:25,673 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:25,739 WARN [AsyncDispatcher event handler] containermanager.ContainerManagerImpl$ContainerEventDispatcher(1080): Event EventType: KILL_CONTAINER sent to absent container container_1471559057429_0002_01_000010 2016-08-18 15:26:25,967 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:25,985 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0002_01_000008 is : 143 2016-08-18 15:26:26,675 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:26,690 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0002_01_000009 is : 143 2016-08-18 15:26:29,891 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63805; # active connections: 11 2016-08-18 15:26:30,256 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:26:30,257 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63805 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu 
Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:30,470 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63805 because read count=-1. Number of active connections: 11 2016-08-18 15:26:31,083 INFO [IPC Server handler 1 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741972_1148{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:26:31,124 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:31,139 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0002_01_000011 is : 143 2016-08-18 15:26:31,169 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741971_1147{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 16349 2016-08-18 15:26:31,178 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741973_1149{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:26:31,206 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741974_1150{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:26:31,223 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741975_1151{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:26:32,242 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741967_1143 127.0.0.1:63273 2016-08-18 15:26:32,242 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741968_1144 127.0.0.1:63273 2016-08-18 15:26:32,242 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741969_1145 127.0.0.1:63273 2016-08-18 15:26:32,243 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741971_1147 127.0.0.1:63273 2016-08-18 15:26:32,243 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741970_1146 127.0.0.1:63273 2016-08-18 15:26:32,243 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741966_1142 127.0.0.1:63273 2016-08-18 15:26:32,243 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: 
blk_1073741956_1132 127.0.0.1:63273 2016-08-18 15:26:32,243 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741953_1129 127.0.0.1:63273 2016-08-18 15:26:32,243 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741964_1140 127.0.0.1:63273 2016-08-18 15:26:32,243 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741957_1133 127.0.0.1:63273 2016-08-18 15:26:32,243 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741958_1134 127.0.0.1:63273 2016-08-18 15:26:32,243 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741955_1131 127.0.0.1:63273 2016-08-18 15:26:32,244 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741962_1138 127.0.0.1:63273 2016-08-18 15:26:32,244 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741960_1136 127.0.0.1:63273 2016-08-18 15:26:32,244 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741961_1137 127.0.0.1:63273 2016-08-18 15:26:32,244 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741959_1135 127.0.0.1:63273 2016-08-18 15:26:32,244 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741963_1139 127.0.0.1:63273 2016-08-18 15:26:32,244 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741954_1130 127.0.0.1:63273 2016-08-18 15:26:32,244 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741965_1141 127.0.0.1:63273 2016-08-18 15:26:32,336 DEBUG [main] mapreduce.MapReduceRestoreService(101): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241 2016-08-18 15:26:32,337 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x71fb76f connecting to ZooKeeper ensemble=localhost:61765 2016-08-18 15:26:32,341 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x71fb76f0x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 15:26:32,343 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15043812, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 15:26:32,343 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 15:26:32,343 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 15:26:32,344 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x71fb76f-0x1569fc0e731002a connected 2016-08-18 15:26:32,345 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:26:32,345 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63815; # active connections: 11 2016-08-18 15:26:32,346 INFO 
[RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:26:32,346 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63815 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:32,360 DEBUG [main] client.ConnectionImplementation(604): Table ns2:table2_restore should be available 2016-08-18 15:26:32,362 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241/_SUCCESS 2016-08-18 15:26:32,368 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 15:26:32,368 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63817; # active connections: 12 2016-08-18 15:26:32,369 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:26:32,369 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63817 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:32,374 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 15:26:32,377 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles(697): Trying to load hfile=hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241/f/bc9fe497745d4f09aee73b42f3e4fe60 first=row0 last=row98 2016-08-18 15:26:32,381 DEBUG [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles$4(788): Going to connect to server region=ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a., hostname=10.22.9.171,63282,1471559038490, seqNum=2 for row with hfile group [{[B@637ded48,hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241/f/bc9fe497745d4f09aee73b42f3e4fe60}] 2016-08-18 15:26:32,382 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:26:32,382 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63818; # active connections: 7 2016-08-18 15:26:32,383 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth 
successful for tyu (auth:SIMPLE) 2016-08-18 15:26:32,383 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63818 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:32,383 INFO [B.defaultRpcServer.handler=0,queue=0,port=63282] regionserver.HStore(670): Validating hfile at hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241/f/bc9fe497745d4f09aee73b42f3e4fe60 for inclusion in store f region ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a. 2016-08-18 15:26:32,390 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63282] regionserver.HStore(682): HFile bounds: first=row0 last=row98 2016-08-18 15:26:32,390 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63282] regionserver.HStore(684): Region bounds: first= last= 2016-08-18 15:26:32,391 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63282] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241/f/bc9fe497745d4f09aee73b42f3e4fe60 as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/cb1600ed23324be092bbe04498db9036_SeqId_6_ 2016-08-18 15:26:32,392 INFO [B.defaultRpcServer.handler=0,queue=0,port=63282] regionserver.HStore(742): Loaded HFile hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241/f/bc9fe497745d4f09aee73b42f3e4fe60 into store 'f' as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/cb1600ed23324be092bbe04498db9036_SeqId_6_ - updating store file list. 
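The run above is the tail end of a bulk load: LoadIncrementalHFiles found hfile bc9fe497745d4f09aee73b42f3e4fe60 (rows row0..row98) in the restore job's bulk_output directory, the region server validated it against the region's empty key bounds, and committed it into store 'f' under a new name. A sketch of driving the same step by hand, assuming the LoadIncrementalHFiles tool interface of this HBase era (directory and table name taken from this log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
    import org.apache.hadoop.util.ToolRunner;

    public class BulkLoadRestoredHFiles {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Args are <hfile directory> <table>; non-directory entries such as the
        // job's _SUCCESS marker are skipped, as the earlier WARN line shows.
        int rc = ToolRunner.run(conf, new LoadIncrementalHFiles(conf), new String[] {
            "/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241",
            "ns2:table2_restore" });
        System.exit(rc);
      }
    }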
2016-08-18 15:26:32,398 INFO [B.defaultRpcServer.handler=0,queue=0,port=63282] regionserver.HStore(777): Loaded HFile hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/cb1600ed23324be092bbe04498db9036_SeqId_6_ into store 'f' 2016-08-18 15:26:32,398 INFO [B.defaultRpcServer.handler=0,queue=0,port=63282] regionserver.HStore(748): Successfully loaded store file hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns2-table2_restore-1471559147241/f/bc9fe497745d4f09aee73b42f3e4fe60 into store f (new location: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/f/cb1600ed23324be092bbe04498db9036_SeqId_6_) 2016-08-18 15:26:32,398 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969 2016-08-18 15:26:32,399 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 15:26:32,400 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731002a 2016-08-18 15:26:32,401 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:26:32,401 DEBUG [main] mapreduce.MapReduceRestoreService(113): Restore Job finished:0 2016-08-18 15:26:32,401 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63818 because read count=-1. Number of active connections: 7 2016-08-18 15:26:32,401 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310026 2016-08-18 15:26:32,401 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63817 because read count=-1. Number of active connections: 12 2016-08-18 15:26:32,401 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63815 because read count=-1. 
Number of active connections: 12 2016-08-18 15:26:32,401 DEBUG [AsyncRpcChannel-pool2-t13] ipc.AsyncRpcChannel$8(566): IPC Client (-20050987) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:26:32,401 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (895973936) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:26:32,401 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (-1404633693) to /10.22.9.171:63282 from tyu: closed 2016-08-18 15:26:32,402 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:26:32,403 INFO [main] impl.RestoreClientImpl(292): ns2:test-14715590609531 has been successfully restored to ns2:table2_restore 2016-08-18 15:26:32,403 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (-1564084136) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:26:32,403 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-18 15:26:32,403 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471559069358 hdfs://localhost:63272/backupUT/backup_1471559069358/ns2/test-14715590609531/ 2016-08-18 15:26:32,403 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471559097949 hdfs://localhost:63272/backupUT/backup_1471559097949/ns2/test-14715590609531/ 2016-08-18 15:26:32,403 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63696 because read count=-1. Number of active connections: 10 2016-08-18 15:26:32,403 DEBUG [main] impl.RestoreClientImpl(215): need to clear merged Image. to be implemented in future jira 2016-08-18 15:26:32,404 DEBUG [main] impl.BackupManifest(325): Loading manifest from: hdfs://localhost:63272/backupUT/backup_1471559069358/ns3/test-14715590609532/.backup.manifest 2016-08-18 15:26:32,407 DEBUG [main] impl.BackupManifest(409): load dependency for: backup_1471559069358 2016-08-18 15:26:32,407 DEBUG [main] impl.BackupManifest(376): Loaded manifest instance from manifest file: /backupUT/backup_1471559069358/ns3/test-14715590609532/.backup.manifest 2016-08-18 15:26:32,407 INFO [main] impl.RestoreClientImpl(266): Restoring 'ns3:test-14715590609532' to 'ns3:table3_restore' from full backup image hdfs://localhost:63272/backupUT/backup_1471559069358/ns3/test-14715590609532 2016-08-18 15:26:32,413 DEBUG [main] util.RestoreServerUtil(109): Folder tableArchivePath: hdfs://localhost:63272/backupUT/backup_1471559069358/ns3/test-14715590609532/archive/data/ns3/test-14715590609532 does not exist 2016-08-18 15:26:32,413 DEBUG [main] util.RestoreServerUtil(315): found table descriptor but no archive dir for table ns3:test-14715590609532, will only create table 2016-08-18 15:26:32,413 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x40bf7f63 connecting to ZooKeeper ensemble=localhost:61765 2016-08-18 15:26:32,415 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x40bf7f630x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 15:26:32,416 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5487dcbd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 15:26:32,416 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 
15:26:32,416 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 15:26:32,417 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x40bf7f63-0x1569fc0e731002b connected 2016-08-18 15:26:32,417 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:26:32,417 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63822; # active connections: 10 2016-08-18 15:26:32,418 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:26:32,418 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63822 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:32,419 INFO [main] util.RestoreServerUtil(585): Truncating existing target table 'ns3:table3_restore', preserving region splits 2016-08-18 15:26:32,420 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 15:26:32,420 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63823; # active connections: 11 2016-08-18 15:26:32,420 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:26:32,420 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63823 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:32,421 INFO [main] client.HBaseAdmin$10(780): Started disable of ns3:table3_restore 2016-08-18 15:26:32,421 INFO [B.defaultRpcServer.handler=4,queue=0,port=63280] master.HMaster(1986): Client=tyu//10.22.9.171 disable ns3:table3_restore 2016-08-18 15:26:32,528 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure DisableTableProcedure (table=ns3:table3_restore) id=23 owner=tyu state=RUNNABLE:DISABLE_TABLE_PREPARE added to the store. 
2016-08-18 15:26:32,531 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-18 15:26:32,531 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:632800000000001 2016-08-18 15:26:32,602 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2fc1fd9c] blockmanagement.BlockManager(3482): BLOCK* BlockManager: ask 127.0.0.1:63273 to delete [blk_1073741953_1129, blk_1073741954_1130, blk_1073741955_1131, blk_1073741956_1132, blk_1073741957_1133, blk_1073741958_1134, blk_1073741959_1135, blk_1073741960_1136, blk_1073741961_1137, blk_1073741962_1138, blk_1073741963_1139, blk_1073741964_1140, blk_1073741965_1141, blk_1073741966_1142, blk_1073741967_1143, blk_1073741968_1144, blk_1073741969_1145, blk_1073741970_1146, blk_1073741971_1147] 2016-08-18 15:26:32,636 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-18 15:26:32,742 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559192742,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-18 15:26:32,744 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:26:32,746 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to DISABLING in META 2016-08-18 15:26:32,843 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-18 15:26:32,850 INFO [ProcedureExecutor-6] procedure.DisableTableProcedure(395): Offlining 1 regions. 2016-08-18 15:26:32,851 DEBUG [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(1352): Starting unassign of ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. (offlining), current state: {64e80db997c7530f46efbcdcb1606064 state=OPEN, ts=1471559110325, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:26:32,851 INFO [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStates(1106): Transition {64e80db997c7530f46efbcdcb1606064 state=OPEN, ts=1471559110325, server=10.22.9.171,63282,1471559038490} to {64e80db997c7530f46efbcdcb1606064 state=PENDING_CLOSE, ts=1471559192851, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:26:32,851 INFO [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 
with state=PENDING_CLOSE 2016-08-18 15:26:32,852 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:26:32,853 INFO [PriorityRpcServer.handler=1,queue=1,port=63282] regionserver.RSRpcServices(1314): Close 64e80db997c7530f46efbcdcb1606064, moving to null 2016-08-18 15:26:32,854 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] handler.CloseRegionHandler(90): Processing close of ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 2016-08-18 15:26:32,855 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.: disabling compactions & flushes 2016-08-18 15:26:32,855 DEBUG [10.22.9.171,63280,1471559038246-org.apache.hadoop.hbase.master.procedure.DisableTableProcedure$BulkDisabler-0] master.AssignmentManager(930): Sent CLOSE to 10.22.9.171,63282,1471559038490 for region ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 2016-08-18 15:26:32,855 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 2016-08-18 15:26:32,855 INFO [StoreCloserThread-ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.-1] regionserver.HStore(839): Closed f 2016-08-18 15:26:32,856 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547 2016-08-18 15:26:32,861 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2 2016-08-18 15:26:32,863 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 2016-08-18 15:26:32,864 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=63280] master.AssignmentManager(2884): Got transition CLOSED for {64e80db997c7530f46efbcdcb1606064 state=PENDING_CLOSE, ts=1471559192851, server=10.22.9.171,63282,1471559038490} from 10.22.9.171,63282,1471559038490 2016-08-18 15:26:32,864 INFO [B.defaultRpcServer.handler=3,queue=0,port=63280] master.RegionStates(1106): Transition {64e80db997c7530f46efbcdcb1606064 state=PENDING_CLOSE, ts=1471559192851, server=10.22.9.171,63282,1471559038490} to {64e80db997c7530f46efbcdcb1606064 state=OFFLINE, ts=1471559192864, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:26:32,864 INFO [B.defaultRpcServer.handler=3,queue=0,port=63280] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 
with state=OFFLINE 2016-08-18 15:26:32,865 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:26:32,866 INFO [B.defaultRpcServer.handler=3,queue=0,port=63280] master.RegionStates(590): Offlined 64e80db997c7530f46efbcdcb1606064 from 10.22.9.171,63282,1471559038490 2016-08-18 15:26:32,866 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] handler.CloseRegionHandler(122): Closed ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 2016-08-18 15:26:33,011 DEBUG [ProcedureExecutor-6] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559193011,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-18 15:26:33,013 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:26:33,014 INFO [ProcedureExecutor-6] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to DISABLED in META 2016-08-18 15:26:33,014 INFO [ProcedureExecutor-6] procedure.DisableTableProcedure(424): Disabled table, ns3:table3_restore, is completed. 2016-08-18 15:26:33,148 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-18 15:26:33,229 DEBUG [ProcedureExecutor-6] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:632800000000001 2016-08-18 15:26:33,229 DEBUG [ProcedureExecutor-6] procedure2.ProcedureExecutor(870): Procedure completed in 700msec: DisableTableProcedure (table=ns3:table3_restore) id=23 owner=tyu state=FINISHED 2016-08-18 15:26:33,653 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=23 2016-08-18 15:26:33,653 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: DISABLE, Table Name: ns3:table3_restore completed 2016-08-18 15:26:33,654 INFO [main] client.HBaseAdmin$8(615): Started truncating ns3:table3_restore 2016-08-18 15:26:33,655 INFO [B.defaultRpcServer.handler=2,queue=0,port=63280] master.HMaster(1848): Client=tyu//10.22.9.171 truncate ns3:table3_restore 2016-08-18 15:26:33,761 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=63280] procedure2.ProcedureExecutor(669): Procedure TruncateTableProcedure (table=ns3:table3_restore preserveSplits=true) id=24 owner=tyu state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION added to the store. 
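The DisableTableProcedure (procId=23) just completed and the TruncateTableProcedure (procId=24) just queued are the master-side halves of two ordinary Admin calls made by RestoreServerUtil: disable the existing restore target, then truncate it while keeping its split points. A sketch of the client side, assuming the standard Admin API of this era (table name taken from this log); note that the blocking disableTable call is exactly what produces the repeated "Checking to see if procedure is done procId=23" polling above:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateRestoreTarget {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("ns3:table3_restore");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);       // blocks, polling "is procedure done" as logged
          }
          admin.truncateTable(table, true);  // true = preserve region splits
        }
      }
    }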
2016-08-18 15:26:33,765 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/ns3:table3_restore/write-master:632800000000002 2016-08-18 15:26:33,766 DEBUG [ProcedureExecutor-7] procedure.TruncateTableProcedure(87): waiting for 'ns3:table3_restore' regions in transition 2016-08-18 15:26:33,873 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"info":[{"timestamp":1471559193872,"tag":[],"qualifier":"","vlen":0}]},"row":"ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064."} 2016-08-18 15:26:33,874 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:26:33,875 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1854): Deleted [{ENCODED => 64e80db997c7530f46efbcdcb1606064, NAME => 'ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.', STARTKEY => '', ENDKEY => ''}] 2016-08-18 15:26:33,877 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(408): Removing 'ns3:table3_restore' from region states. 2016-08-18 15:26:33,878 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(412): Marking 'ns3:table3_restore' as deleted. 2016-08-18 15:26:33,878 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1406): Delete{"ts":9223372036854775807,"totalColumns":1,"families":{"table":[{"timestamp":1471559193878,"tag":[],"qualifier":"state","vlen":0}]},"row":"ns3:table3_restore"} 2016-08-18 15:26:33,879 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:26:33,880 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1726): Deleted table ns3:table3_restore state from META 2016-08-18 15:26:33,993 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(340): Archiving region ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 
from FS 2016-08-18 15:26:33,993 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(93): ARCHIVING hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064 2016-08-18 15:26:33,996 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(134): Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064/recovered.edits] 2016-08-18 15:26:34,004 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064/recovered.edits/4.seqid, to hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/archive/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064/recovered.edits/4.seqid 2016-08-18 15:26:34,004 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741919_1095 127.0.0.1:63273 2016-08-18 15:26:34,005 DEBUG [ProcedureExecutor-7] backup.HFileArchiver(453): Deleted all region files in: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064 2016-08-18 15:26:34,006 DEBUG [ProcedureExecutor-7] procedure.DeleteTableProcedure(344): Table 'ns3:table3_restore' archived! 
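Truncation does not delete the old region data outright: HFileArchiver, as the entries above show, moves each region file from the table's .tmp/data tree into a parallel archive/data tree and only then removes the region directory. A toy illustration of that path mapping, using a hypothetical helper (not an HBase API) and the paths from this log:

    import org.apache.hadoop.fs.Path;

    public class ArchivePathDemo {
      // Hypothetical helper mirroring the move logged above:
      // .tmp/data/<ns>/<table>/<region>/<file> -> archive/data/<ns>/<table>/<region>/<file>
      static Path toArchive(Path rootDir, String ns, String table, String region, String file) {
        return new Path(rootDir, "archive/data/" + ns + "/" + table + "/" + region + "/" + file);
      }

      public static void main(String[] args) {
        Path root = new Path("hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a");
        System.out.println(toArchive(root, "ns3", "table3_restore",
            "64e80db997c7530f46efbcdcb1606064", "recovered.edits/4.seqid"));
      }
    }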
2016-08-18 15:26:34,007 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741918_1094 127.0.0.1:63273 2016-08-18 15:26:34,126 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741976_1152{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 291 2016-08-18 15:26:34,534 DEBUG [ProcedureExecutor-7] util.FSTableDescriptors(718): Wrote descriptor into: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp/data/ns3/table3_restore/.tabledesc/.tableinfo.0000000001 2016-08-18 15:26:34,535 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(6162): creating HRegion ns3:table3_restore HTD == 'ns3:table3_restore', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/.tmp Table name == ns3:table3_restore 2016-08-18 15:26:34,544 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741977_1153{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 45 2016-08-18 15:26:34,948 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 2016-08-18 15:26:34,949 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.: disabling compactions & flushes 2016-08-18 15:26:34,949 DEBUG [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 2016-08-18 15:26:34,949 INFO [RegionOpenAndInitThread-ns3:table3_restore-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 
2016-08-18 15:26:35,061 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1374): Put{"totalColumns":1,"families":{"info":[{"timestamp":9223372036854775807,"tag":[],"qualifier":"regioninfo","vlen":44}]},"row":"ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064."} 2016-08-18 15:26:35,062 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:26:35,063 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1571): Added 1 2016-08-18 15:26:35,167 INFO [ProcedureExecutor-7] master.AssignmentManager(726): Assigning 1 region(s) to 10.22.9.171,63282,1471559038490 2016-08-18 15:26:35,168 ERROR [ProcedureExecutor-7] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.isDisabledorDisablingRegionInRIT(AssignmentManager.java:1221)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:739)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1546)
    at org.apache.hadoop.hbase.util.ModifyRegionUtils.assignRegions(ModifyRegionUtils.java:254)
    at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.assignRegions(CreateTableProcedure.java:430)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:122)
    at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1066)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:855)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:808)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:494)
2016-08-18 15:26:35,168 INFO [ProcedureExecutor-7] master.RegionStates(1106): Transition {64e80db997c7530f46efbcdcb1606064 state=OFFLINE, ts=1471559195167, server=null} to {64e80db997c7530f46efbcdcb1606064 state=PENDING_OPEN, ts=1471559195168, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:26:35,168 INFO [ProcedureExecutor-7] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 
with state=PENDING_OPEN, sn=10.22.9.171,63282,1471559038490 2016-08-18 15:26:35,169 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:26:35,170 INFO [PriorityRpcServer.handler=3,queue=1,port=63282] regionserver.RSRpcServices(1666): Open ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 2016-08-18 15:26:35,175 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(6339): Opening region: {ENCODED => 64e80db997c7530f46efbcdcb1606064, NAME => 'ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.', STARTKEY => '', ENDKEY => ''} 2016-08-18 15:26:35,176 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.MetricsRegionSourceImpl(70): Creating new MetricsRegionSourceImpl for table table3_restore 64e80db997c7530f46efbcdcb1606064 2016-08-18 15:26:35,176 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(736): Instantiated ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 2016-08-18 15:26:35,179 INFO [StoreOpener-64e80db997c7530f46efbcdcb1606064-1] hfile.CacheConfig(292): blockCache=LruBlockCache{blockCount=4, currentSize=1087976, freeSize=1042874328, maxSize=1043962304, heapSize=1087976, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-08-18 15:26:35,179 INFO [StoreOpener-64e80db997c7530f46efbcdcb1606064-1] compactions.CompactionConfiguration(137): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, base window in milliseconds 21600000, windows per tier 4,incoming window min 6 2016-08-18 15:26:35,180 DEBUG [StoreOpener-64e80db997c7530f46efbcdcb1606064-1] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064/f 2016-08-18 15:26:35,180 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(3881): Found 0 recovered edits file(s) under hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064 2016-08-18 15:26:35,185 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-08-18 15:26:35,185 INFO [RS_OPEN_REGION-10.22.9.171:63282-2] regionserver.HRegion(871): Onlined 64e80db997c7530f46efbcdcb1606064; next sequenceid=2 2016-08-18 15:26:35,186 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547 2016-08-18 15:26:35,187 INFO [PostOpenDeployTasks:64e80db997c7530f46efbcdcb1606064] regionserver.HRegionServer(1952): Post open deploy tasks for 
ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 2016-08-18 15:26:35,188 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.AssignmentManager(2884): Got transition OPENED for {64e80db997c7530f46efbcdcb1606064 state=PENDING_OPEN, ts=1471559195168, server=10.22.9.171,63282,1471559038490} from 10.22.9.171,63282,1471559038490 2016-08-18 15:26:35,188 INFO [B.defaultRpcServer.handler=0,queue=0,port=63280] master.RegionStates(1106): Transition {64e80db997c7530f46efbcdcb1606064 state=PENDING_OPEN, ts=1471559195168, server=10.22.9.171,63282,1471559038490} to {64e80db997c7530f46efbcdcb1606064 state=OPEN, ts=1471559195188, server=10.22.9.171,63282,1471559038490} 2016-08-18 15:26:35,188 INFO [B.defaultRpcServer.handler=0,queue=0,port=63280] master.RegionStateStore(207): Updating hbase:meta row ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. with state=OPEN, openSeqNum=2, server=10.22.9.171,63282,1471559038490 2016-08-18 15:26:35,188 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:26:35,189 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=63280] master.RegionStates(452): Onlined 64e80db997c7530f46efbcdcb1606064 on 10.22.9.171,63282,1471559038490 2016-08-18 15:26:35,189 DEBUG [ProcedureExecutor-7] master.AssignmentManager(897): Bulk assigning done for 10.22.9.171,63282,1471559038490 2016-08-18 15:26:35,189 DEBUG [ProcedureExecutor-7] hbase.MetaTableAccessor(1355): Put{"totalColumns":1,"families":{"table":[{"timestamp":1471559195189,"tag":[],"qualifier":"state","vlen":2}]},"row":"ns3:table3_restore"} 2016-08-18 15:26:35,189 ERROR [B.defaultRpcServer.handler=0,queue=0,port=63280] master.TableStateManager(134): Unable to get table ns3:table3_restore state
org.apache.hadoop.hbase.TableNotFoundException: ns3:table3_restore
    at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
    at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2311)
    at org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2891)
    at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1369)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2229)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:136)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
2016-08-18 15:26:35,190 DEBUG [PostOpenDeployTasks:64e80db997c7530f46efbcdcb1606064] regionserver.HRegionServer(1979): Finished post open deploy task for ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. 
2016-08-18 15:26:35,190 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141 2016-08-18 15:26:35,190 DEBUG [RS_OPEN_REGION-10.22.9.171:63282-2] handler.OpenRegionHandler(126): Opened ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064. on 10.22.9.171,63282,1471559038490 2016-08-18 15:26:35,191 INFO [ProcedureExecutor-7] hbase.MetaTableAccessor(1700): Updated table ns3:table3_restore state to ENABLED in META 2016-08-18 15:26:35,299 DEBUG [ProcedureExecutor-7] procedure.TruncateTableProcedure(129): truncate 'ns3:table3_restore' completed 2016-08-18 15:26:35,407 DEBUG [ProcedureExecutor-7] lock.ZKInterProcessLockBase(328): Released /1/table-lock/ns3:table3_restore/write-master:632800000000002 2016-08-18 15:26:35,408 DEBUG [ProcedureExecutor-7] procedure2.ProcedureExecutor(870): Procedure completed in 1.6440sec: TruncateTableProcedure (table=ns3:table3_restore preserveSplits=true) id=24 owner=tyu state=FINISHED 2016-08-18 15:26:35,530 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=63280] master.MasterRpcServices(974): Checking to see if procedure is done procId=24 2016-08-18 15:26:35,531 INFO [main] client.HBaseAdmin$TableFuture(3302): Operation: TRUNCATE, Table Name: ns3:table3_restore completed 2016-08-18 15:26:35,531 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 15:26:35,531 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731002b 2016-08-18 15:26:35,532 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:26:35,533 INFO [main] impl.RestoreClientImpl(284): Restoring 'ns3:test-14715590609532' to 'ns3:table3_restore' from log dirs: hdfs://localhost:63272/backupUT/backup_1471559097949/WALs 2016-08-18 15:26:35,533 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (614127757) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:26:35,533 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63823 because read count=-1. Number of active connections: 11 2016-08-18 15:26:35,533 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63822 because read count=-1. 
Number of active connections: 11 2016-08-18 15:26:35,533 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (-1529242559) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:26:35,534 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x41988ef5 connecting to ZooKeeper ensemble=localhost:61765 2016-08-18 15:26:35,536 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x41988ef50x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 15:26:35,537 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5262b644, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 15:26:35,537 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 15:26:35,538 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 15:26:35,538 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x41988ef5-0x1569fc0e731002c connected 2016-08-18 15:26:35,540 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:26:35,540 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63829; # active connections: 10 2016-08-18 15:26:35,541 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:26:35,541 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63829 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:35,542 INFO [main] mapreduce.MapReduceRestoreService(75): Restore incremental backup from directory hdfs://localhost:63272/backupUT/backup_1471559097949/WALs from hbase tables ,ns3:test-14715590609532 to tables ,ns3:table3_restore 2016-08-18 15:26:35,542 INFO [main] mapreduce.MapReduceRestoreService(80): Restore ns3:test-14715590609532 into ns3:table3_restore 2016-08-18 15:26:35,543 DEBUG [main] mapreduce.WALPlayer(307): add incremental job :/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471559195542 from hdfs://localhost:63272/backupUT/backup_1471559097949/WALs to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471559195542 2016-08-18 15:26:35,544 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5444ed54 connecting to ZooKeeper ensemble=localhost:61765 2016-08-18 15:26:35,546 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x5444ed540x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 15:26:35,547 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5104c20d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind 
address=null 2016-08-18 15:26:35,547 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 15:26:35,547 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 15:26:35,548 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x5444ed54-0x1569fc0e731002d connected 2016-08-18 15:26:35,549 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 15:26:35,549 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63831; # active connections: 11 2016-08-18 15:26:35,552 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:26:35,552 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63831 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:35,554 INFO [main] mapreduce.HFileOutputFormat2(478): bulkload locality sensitive enabled 2016-08-18 15:26:35,554 INFO [main] mapreduce.HFileOutputFormat2(483): Looking up current regions for table ns3:test-14715590609532 2016-08-18 15:26:35,557 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:26:35,557 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63832; # active connections: 12 2016-08-18 15:26:35,558 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:26:35,558 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63832 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:35,561 INFO [main] mapreduce.HFileOutputFormat2(485): Configuring 1 reduce partitions to match current region count 2016-08-18 15:26:35,561 INFO [main] mapreduce.HFileOutputFormat2(378): Writing partition information to /user/tyu/hbase-staging/partitions_92801828-3d0a-44d8-8b40-28d3eb1f3486 2016-08-18 15:26:35,568 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741978_1154{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:35,570 WARN [main] mapreduce.TableMapReduceUtil(786): The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. 
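MapReduceRestoreService is configuring a WALPlayer job in bulk-load mode here: instead of replaying edits as live Puts, the job writes HFiles into the bulk_output directory via HFileOutputFormat2, with one reduce partition per current region of the target table (hence "Configuring 1 reduce partitions" above), and those HFiles are bulk-loaded afterwards. A sketch of the equivalent standalone invocation, assuming WALPlayer's public no-arg constructor and its BULK_OUTPUT_CONF_KEY as in HBase 2.x (paths and table names taken from this log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.WALPlayer;
    import org.apache.hadoop.util.ToolRunner;

    public class ReplayBackupWals {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // With a bulk output directory set, WALPlayer emits HFiles for a later
        // bulk load instead of writing directly into the target table.
        conf.set(WALPlayer.BULK_OUTPUT_CONF_KEY,
            "/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471559195542");
        int rc = ToolRunner.run(conf, new WALPlayer(), new String[] {
            "hdfs://localhost:63272/backupUT/backup_1471559097949/WALs", // backed-up WAL dir
            "ns3:test-14715590609532",  // table name recorded in the WAL edits
            "ns3:table3_restore" });    // table to map those edits onto
        System.exit(rc);
      }
    }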
2016-08-18 15:26:35,603 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2fc1fd9c] blockmanagement.BlockManager(3482): BLOCK* BlockManager: ask 127.0.0.1:63273 to delete [blk_1073741918_1094, blk_1073741919_1095] 2016-08-18 15:26:36,318 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.HConstants, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-744021572439996328.jar 2016-08-18 15:26:38,004 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0002_000001 (auth:SIMPLE) 2016-08-18 15:26:38,367 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties 2016-08-18 15:26:45,441 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-6231910967858573972.jar 2016-08-18 15:26:47,079 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.client.Put, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-4216149376760665642.jar 2016-08-18 15:26:47,125 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-8685970981263601922.jar 2016-08-18 15:26:53,888 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-4195334222481379118.jar 2016-08-18 15:26:53,888 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.zookeeper.ZooKeeper, using jar /Users/tyu/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar 2016-08-18 15:26:53,888 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class io.netty.channel.Channel, using jar /Users/tyu/.m2/repository/io/netty/netty-all/4.0.30.Final/netty-all-4.0.30.Final.jar 2016-08-18 15:26:53,889 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.protobuf.Message, using jar /Users/tyu/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar 2016-08-18 15:26:53,889 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.collect.Lists, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-18 15:26:53,889 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.htrace.Trace, using jar /Users/tyu/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar 2016-08-18 15:26:53,889 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.codahale.metrics.MetricRegistry, using jar /Users/tyu/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.2/metrics-core-3.1.2.jar 2016-08-18 15:26:54,102 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-1318538006764168560.jar 2016-08-18 15:26:54,102 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar 
/Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-1318538006764168560.jar 2016-08-18 15:26:55,335 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.WALInputFormat, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-2578111083401581760.jar 2016-08-18 15:26:55,335 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-1318538006764168560.jar 2016-08-18 15:26:55,336 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.KeyValue, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-1318538006764168560.jar 2016-08-18 15:26:55,336 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2, using jar /Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/hadoop-2578111083401581760.jar 2016-08-18 15:26:55,337 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner, using jar /Users/tyu/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.2/hadoop-mapreduce-client-core-2.7.2.jar 2016-08-18 15:26:55,337 INFO [main] mapreduce.HFileOutputFormat2(498): Incremental table ns3:test-14715590609532 output configured. 2016-08-18 15:26:55,337 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 15:26:55,337 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731002d 2016-08-18 15:26:55,338 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:26:55,338 DEBUG [main] mapreduce.WALPlayer(325): success configuring load incremental job 2016-08-18 15:26:55,339 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63831 because read count=-1. Number of active connections: 12 2016-08-18 15:26:55,339 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (-678798377) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:26:55,339 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63832 because read count=-1. Number of active connections: 12 2016-08-18 15:26:55,339 DEBUG [main] mapreduce.TableMapReduceUtil(920): For class com.google.common.base.Preconditions, using jar /Users/tyu/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar 2016-08-18 15:26:55,339 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (-2005189185) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:26:55,361 WARN [main] mapreduce.JobResourceUploader(64): Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 
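The JobResourceUploader warning above is Hadoop's generic reminder that a driver routed through ToolRunner gets -D, -files, and -libjars parsed for free. A minimal sketch of that pattern; the class name and the empty job body are placeholders, not code from this test:

```java
// Generic Tool/ToolRunner pattern the warning above refers to; the class
// name and empty job body are placeholders, not code from this test run.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ExampleJobDriver extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    Configuration conf = getConf(); // -D key=value pairs parsed by ToolRunner land here
    // ... build and submit the MapReduce job from conf ...
    return 0;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner strips generic options (-D, -files, -libjars) before run() sees args
    System.exit(ToolRunner.run(new Configuration(), new ExampleJobDriver(), args));
  }
}
```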
2016-08-18 15:26:55,389 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741979_1155{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:55,397 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741980_1156{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:26:55,405 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741981_1157{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:55,412 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741982_1158{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:55,418 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741983_1159{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:26:55,427 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741984_1160{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:26:55,436 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741985_1161{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 1531485 2016-08-18 15:26:55,860 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741986_1162{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:55,881 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741987_1163{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:55,894 INFO [IPC Server handler 1 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741988_1164{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:26:55,902 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741989_1165{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:55,913 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741990_1166{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:55,921 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741991_1167{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:55,930 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741992_1168{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:55,931 WARN [main] mapreduce.JobResourceUploader(171): No job jar file set. User classes may not be found. See Job or Job#setJar(String). 2016-08-18 15:26:55,943 DEBUG [main] mapreduce.WALInputFormat(265): Scanning hdfs://localhost:63272/backupUT/backup_1471559097949/WALs for WAL files 2016-08-18 15:26:55,945 WARN [main] mapreduce.WALInputFormat(289): File hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/.backup.manifest does not appear to be an WAL file. Skipping... 
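The scan above is WALInputFormat enumerating the backed-up WAL directory and skipping the .backup.manifest metadata file that sits beside the logs. Driving the same replay by hand would look roughly like the sketch below, assuming WALPlayer's wal.bulk.output property is what switches it from live Puts into HFile output; the WAL directory and table mapping are taken from the log, and the local output path is illustrative:

```java
// Hedged sketch: replay the backed-up WALs into HFiles for bulk load.
// Assumes "wal.bulk.output" is the bulk-output switch; without it,
// WALPlayer writes the recovered edits straight to the live target table.
import org.apache.hadoop.hbase.mapreduce.WALPlayer;

public class ReplayBackupWals {
  public static void main(String[] args) throws Exception {
    WALPlayer.main(new String[] {
        "-Dwal.bulk.output=/tmp/bulk_output-ns3-table3_restore", // illustrative path
        "hdfs://localhost:63272/backupUT/backup_1471559097949/WALs", // WAL dir from the log
        "ns3:test-14715590609532", // source table recorded in the WALs
        "ns3:table3_restore"       // target table it is mapped onto
    });
  }
}
```

The HFiles that land in the output directory are then handed to LoadIncrementalHFiles, which is exactly the step logged further down ("Restoring HFiles from directory ...").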
2016-08-18 15:26:55,946 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100571; access_time=1471559100563; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 15:26:55,946 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559040116; isDirectory=false; length=981; replication=1; blocksize=134217728; modification_time=1471559100104; access_time=1471559100092; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 15:26:55,946 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100589; access_time=1471559100580; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 15:26:55,946 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994; isDirectory=false; length=91; replication=1; blocksize=134217728; modification_time=1471559100612; access_time=1471559100599; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 15:26:55,946 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559042158; isDirectory=false; length=1629; replication=1; blocksize=134217728; modification_time=1471559100532; access_time=1471559100120; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 15:26:55,946 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398; isDirectory=false; length=10957; replication=1; blocksize=134217728; modification_time=1471559100630; access_time=1471559100622; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 15:26:55,946 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559069577; isDirectory=false; length=11592; replication=1; blocksize=134217728; modification_time=1471559100553; access_time=1471559100545; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 15:26:55,947 INFO [main] mapreduce.WALInputFormat(281): Found: LocatedFileStatus{path=hdfs://localhost:63272/backupUT/backup_1471559097949/WALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843; isDirectory=false; length=11059; replication=1; blocksize=134217728; modification_time=1471559100646; access_time=1471559100638; owner=tyu; group=supergroup; permission=rw-r--r--; isSymlink=false} 2016-08-18 15:26:55,954 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741993_1169{UCState=UNDER_CONSTRUCTION, 
truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:26:55,962 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741994_1170{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:55,975 INFO [IPC Server handler 0 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741995_1171{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:26:56,011 WARN [ResourceManager Event Processor] capacity.LeafQueue(610): maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start 2016-08-18 15:26:56,011 WARN [ResourceManager Event Processor] capacity.LeafQueue(631): maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start 2016-08-18 15:26:56,057 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:26:58,862 DEBUG [10.22.9.171,63282,1471559038490_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 15:26:59,049 INFO [10.22.9.171,63280,1471559038246_ChoreService_1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x743206f7 connecting to ZooKeeper ensemble=localhost:61765 2016-08-18 15:26:59,054 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x743206f70x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 15:26:59,055 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6eff5ac5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 15:26:59,055 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 15:26:59,055 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 15:26:59,055 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(580): Has backup sessions from hbase:backup 2016-08-18 15:26:59,056 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0x743206f7-0x1569fc0e731002e connected 2016-08-18 15:26:59,058 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:26:59,058 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63871; # active connections: 11 2016-08-18 15:26:59,059 INFO 
[RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:26:59,059 INFO [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63871 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:59,062 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:26:59,062 DEBUG [RpcServer.listener,port=63282] ipc.RpcServer$Listener(880): RpcServer.listener,port=63282: connection from 10.22.9.171:63872; # active connections: 7 2016-08-18 15:26:59,063 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:26:59,063 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63872 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:26:59,066 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559040101 2016-08-18 15:26:59,067 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559040101 2016-08-18 15:26:59,067 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590 2016-08-18 15:26:59,067 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559069590 2016-08-18 15:26:59,067 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577 2016-08-18 15:26:59,068 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559069577 2016-08-18 15:26:59,068 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup 
hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559040101 2016-08-18 15:26:59,069 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559040101 2016-08-18 15:26:59,069 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994 2016-08-18 15:26:59,070 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559069994 2016-08-18 15:26:59,070 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398 2016-08-18 15:26:59,071 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559064398 2016-08-18 15:26:59,071 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] impl.BackupSystemTable(560): Check if WAL file has been already backed up in hbase:backup hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843 2016-08-18 15:26:59,072 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] master.BackupLogCleaner(77): Found log file in hbase:backup, deleting: hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559065843 2016-08-18 15:26:59,072 INFO [10.22.9.171,63280,1471559038246_ChoreService_1] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731002e 2016-08-18 15:26:59,073 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:26:59,073 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63872 because read count=-1. Number of active connections: 7 2016-08-18 15:26:59,073 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (412987752) to /10.22.9.171:63282 from tyu: closed 2016-08-18 15:26:59,073 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (-2121122085) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:26:59,073 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63871 because read count=-1. 
Number of active connections: 11 2016-08-18 15:26:59,075 DEBUG [10.22.9.171,63280,1471559038246_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 15:27:00,953 INFO [Socket Reader #1 for port 63350] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:01,194 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741996_1172{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:27:02,354 DEBUG [10.22.9.171,63319,1471559042214_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 15:27:02,370 DEBUG [10.22.9.171,63314,1471559042157_ChoreService_1] throttle.PressureAwareCompactionThroughputController(103): compactionPressure is 0.0, tune compaction throughput to 10.00 MB/sec 2016-08-18 15:27:02,633 DEBUG [region-location-3] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/info 2016-08-18 15:27:02,633 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/backup/850b903ca0af513aa15775825a9a082c/meta 2016-08-18 15:27:02,633 DEBUG [region-location-4] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/namespace/a3b1a9605e4887d65b7f50b16f400740/info 2016-08-18 15:27:02,634 DEBUG [region-location-3] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/table 2016-08-18 15:27:02,634 DEBUG [region-location-2] regionserver.HRegionFileSystem(202): No StoreFiles for: hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/backup/850b903ca0af513aa15775825a9a082c/session 2016-08-18 15:27:03,175 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:03,175 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:04,032 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:04,033 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:05,042 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:06,052 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:07,436 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:07,458 WARN [ContainersLauncher #2] 
nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0003_01_000002 is : 143 2016-08-18 15:27:09,080 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:09,222 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:09,243 WARN [ContainersLauncher #3] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0003_01_000003 is : 143 2016-08-18 15:27:09,243 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:09,264 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0003_01_000004 is : 143 2016-08-18 15:27:09,278 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:09,299 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0003_01_000005 is : 143 2016-08-18 15:27:10,082 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:10,199 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:10,221 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0003_01_000006 is : 143 2016-08-18 15:27:10,705 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:10,725 WARN [ContainersLauncher #0] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0003_01_000007 is : 143 2016-08-18 15:27:12,103 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:12,327 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:12,343 WARN [ContainersLauncher #2] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0003_01_000008 is : 143 2016-08-18 15:27:12,714 WARN [AsyncDispatcher event handler] containermanager.ContainerManagerImpl$ContainerEventDispatcher(1080): Event EventType: KILL_CONTAINER sent to absent container container_1471559057429_0003_01_000010 2016-08-18 15:27:13,063 INFO [Socket Reader #1 for port 63359] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:13,078 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0003_01_000009 is : 143 2016-08-18 15:27:14,962 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE) 2016-08-18 15:27:14,977 WARN [ContainersLauncher #1] nodemanager.DefaultContainerExecutor(224): Exit code from container container_1471559057429_0003_01_000011 is : 143 2016-08-18 15:27:15,001 INFO [IPC Server handler 5 on 63272] 
blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741997_1173{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 16349 2016-08-18 15:27:15,009 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741998_1174{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:27:15,030 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741999_1175{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0 2016-08-18 15:27:15,047 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073742000_1176{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0 2016-08-18 15:27:15,945 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741998_1174 127.0.0.1:63273 2016-08-18 15:27:15,948 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741946_1122 127.0.0.1:63273 2016-08-18 15:27:15,948 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741973_1149 127.0.0.1:63273 2016-08-18 15:27:16,068 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741993_1169 127.0.0.1:63273 2016-08-18 15:27:16,069 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741994_1170 127.0.0.1:63273 2016-08-18 15:27:16,069 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741995_1171 127.0.0.1:63273 2016-08-18 15:27:16,069 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741997_1173 127.0.0.1:63273 2016-08-18 15:27:16,069 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741996_1172 127.0.0.1:63273 2016-08-18 15:27:16,069 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741992_1168 127.0.0.1:63273 2016-08-18 15:27:16,069 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741989_1165 127.0.0.1:63273 2016-08-18 15:27:16,069 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741986_1162 127.0.0.1:63273 2016-08-18 15:27:16,069 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741979_1155 127.0.0.1:63273 2016-08-18 15:27:16,069 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741984_1160 127.0.0.1:63273 2016-08-18 15:27:16,070 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* 
addToInvalidates: blk_1073741987_1163 127.0.0.1:63273 2016-08-18 15:27:16,070 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741981_1157 127.0.0.1:63273 2016-08-18 15:27:16,070 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741982_1158 127.0.0.1:63273 2016-08-18 15:27:16,070 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741985_1161 127.0.0.1:63273 2016-08-18 15:27:16,070 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741988_1164 127.0.0.1:63273 2016-08-18 15:27:16,070 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741983_1159 127.0.0.1:63273 2016-08-18 15:27:16,070 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741990_1166 127.0.0.1:63273 2016-08-18 15:27:16,070 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741980_1156 127.0.0.1:63273 2016-08-18 15:27:16,070 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741991_1167 127.0.0.1:63273 2016-08-18 15:27:16,264 DEBUG [main] mapreduce.MapReduceRestoreService(101): Restoring HFiles from directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471559195542 2016-08-18 15:27:16,265 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xc21a9b0 connecting to ZooKeeper ensemble=localhost:61765 2016-08-18 15:27:16,269 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0xc21a9b00x0, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-08-18 15:27:16,270 DEBUG [main] ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a8447b4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-08-18 15:27:16,271 DEBUG [main] ipc.AsyncRpcClient(160): Starting async Hbase RPC client 2016-08-18 15:27:16,271 DEBUG [main] ipc.AsyncRpcClient(171): Use global event loop group NioEventLoopGroup 2016-08-18 15:27:16,271 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(674): hconnection-0xc21a9b0-0x1569fc0e731002f connected 2016-08-18 15:27:16,273 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service ClientService, sasl=false 2016-08-18 15:27:16,273 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63951; # active connections: 11 2016-08-18 15:27:16,273 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:27:16,274 INFO [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63951 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:27:16,279 DEBUG [main] 
client.ConnectionImplementation(604): Table ns3:table3_restore should be available 2016-08-18 15:27:16,281 WARN [main] mapreduce.LoadIncrementalHFiles(199): Skipping non-directory hdfs://localhost:63272/var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471559195542/_SUCCESS 2016-08-18 15:27:16,282 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel(479): Use SIMPLE authentication for service MasterService, sasl=false 2016-08-18 15:27:16,282 DEBUG [RpcServer.listener,port=63280] ipc.RpcServer$Listener(880): RpcServer.listener,port=63280: connection from 10.22.9.171:63952; # active connections: 12 2016-08-18 15:27:16,283 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1710): Auth successful for tyu (auth:SIMPLE) 2016-08-18 15:27:16,283 INFO [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Connection(1740): Connection from 10.22.9.171 port: 63952 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/upstream-backup" revision: "5ab7d3bd67d0ab62adb64e7bc07e92d27eb32694" user: "tyu" date: "Thu Aug 18 15:23:42 PDT 2016" src_checksum: "3b4dae670e9e546c1c9da48d852a2b1c" version_major: 2 version_minor: 0 2016-08-18 15:27:16,285 WARN [main] mapreduce.LoadIncrementalHFiles(350): Bulk load operation did not find any files to load in directory /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/hbase-tyu/bulk_output-ns3-table3_restore-1471559195542. Does it contain files in subdirectories that correspond to column family names? 2016-08-18 15:27:16,285 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 15:27:16,285 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731002f 2016-08-18 15:27:16,286 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:27:16,286 DEBUG [main] mapreduce.MapReduceRestoreService(113): Restore Job finished:0 2016-08-18 15:27:16,286 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731002c 2016-08-18 15:27:16,286 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63952 because read count=-1. Number of active connections: 12 2016-08-18 15:27:16,286 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63951 because read count=-1. 
Number of active connections: 12 2016-08-18 15:27:16,286 DEBUG [AsyncRpcChannel-pool2-t8] ipc.AsyncRpcChannel$8(566): IPC Client (1763037915) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:27:16,286 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (1945854258) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:27:16,287 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:27:16,288 INFO [main] impl.RestoreClientImpl(292): ns3:test-14715590609532 has been successfully restored to ns3:table3_restore 2016-08-18 15:27:16,288 INFO [main] impl.RestoreClientImpl(220): Restore includes the following image(s): 2016-08-18 15:27:16,288 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471559069358 hdfs://localhost:63272/backupUT/backup_1471559069358/ns3/test-14715590609532/ 2016-08-18 15:27:16,288 INFO [main] impl.RestoreClientImpl(222): Backup: backup_1471559097949 hdfs://localhost:63272/backupUT/backup_1471559097949/ns3/test-14715590609532/ 2016-08-18 15:27:16,288 DEBUG [main] impl.RestoreClientImpl(234): restoreStage finished 2016-08-18 15:27:16,288 INFO [main] impl.RestoreClientImpl(108): Restore for [ns1:test-1471559060953, ns2:test-14715590609531, ns3:test-14715590609532] are successful! 2016-08-18 15:27:16,288 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63829 because read count=-1. Number of active connections: 10 2016-08-18 15:27:16,288 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (1906845753) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:27:16,375 INFO [main] hbase.ResourceChecker(172): after: backup.TestIncrementalBackup#TestIncBackupRestore Thread=879 (was 791) Potentially hanging thread: LogDeleter #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: LogDeleter #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: Async disk worker #0 for volume /Users/tyu/upstream-backup/hbase-server/target/test-data/1d1f5955-4d11-4726-93f5-8aac3e385d0f/dfscluster_2df11bdc-bdca-4f31-985d-3dff80836cf4/dfs/data/data2/current 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-132219022_1 at /127.0.0.1:63943 [Waiting for operation #3] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: Async disk worker #0 for volume /Users/tyu/upstream-backup/hbase-server/target/test-data/1d1f5955-4d11-4726-93f5-8aac3e385d0f/dfscluster_2df11bdc-bdca-4f31-985d-3dff80836cf4/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1801930766_1 at /127.0.0.1:63947 [Waiting for operation #3] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) 
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63282-5
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: Timer for 'JobHistoryServer' metrics system
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: B.defaultRpcServer.handler=1,queue=0,port=63280-SendThread(localhost:61765)
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1859428835_1 at /127.0.0.1:63452 [Receiving block BP-1495083349-10.22.9.171-1471559035413:blk_1073741890_1066]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:801)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: MASTER_TABLE_OPERATIONS-10.22.9.171:63280-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: B.defaultRpcServer.handler=1,queue=0,port=63280-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)

Potentially hanging thread: rs(10.22.9.171,63280,1471559038246)-backup-pool30-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: PacketResponder: BP-1495083349-10.22.9.171-1471559035413:blk_1073741887_1063, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1230)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1301)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: AsyncRpcChannel-pool2-t14
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ContainersLauncher #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x61ff4593-shared-pool33-t118
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: member: '10.22.9.171,63282,1471559038490' subprocedure-pool2-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ResponseProcessor for block BP-1495083349-10.22.9.171-1471559035413:blk_1073741887_1063
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:734)

Potentially hanging thread: PacketResponder: BP-1495083349-10.22.9.171-1471559035413:blk_1073741886_1062, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1230)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1301)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1801930766_1 at /127.0.0.1:63948 [Waiting for operation #3]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_CLOSE_REGION-10.22.9.171:63282-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x61ff4593-shared-pool33-t119
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: (10.22.9.171,63280,1471559038246)-proc-coordinator-pool6-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63282-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ContainersLauncher #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ContainersLauncher #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1859428835_1 at /127.0.0.1:63448 [Receiving block BP-1495083349-10.22.9.171-1471559035413:blk_1073741886_1062]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:801)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DeletionService #3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_CLOSE_REGION-10.22.9.171:63282-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: Thread-4429
    java.io.FileInputStream.readBytes(Native Method)
    java.io.FileInputStream.read(FileInputStream.java:272)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
    sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
    sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
    java.io.InputStreamReader.read(InputStreamReader.java:184)
    java.io.BufferedReader.fill(BufferedReader.java:154)
    java.io.BufferedReader.readLine(BufferedReader.java:317)
    java.io.BufferedReader.readLine(BufferedReader.java:382)
    org.apache.hadoop.util.Shell$1.run(Shell.java:510)

Potentially hanging thread: AsyncRpcChannel-pool2-t12
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ContainersLauncher #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1859428835_1 at /127.0.0.1:63451 [Receiving block BP-1495083349-10.22.9.171-1471559035413:blk_1073741889_1065]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:801)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1823871233_1 at /127.0.0.1:63447 [Receiving block BP-1495083349-10.22.9.171-1471559035413:blk_1073741885_1061]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:801)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ResponseProcessor for block BP-1495083349-10.22.9.171-1471559035413:blk_1073741886_1062
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:734)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63314-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ApplicationMasterLauncher #4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: region-location-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ContainersLauncher #3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: region-location-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: rs(10.22.9.171,63280,1471559038246)-backup-pool19-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1859428835_1 at /127.0.0.1:63449 [Receiving block BP-1495083349-10.22.9.171-1471559035413:blk_1073741887_1063]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:801)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: region-location-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1823871233_1 at /127.0.0.1:63450 [Receiving block BP-1495083349-10.22.9.171-1471559035413:blk_1073741888_1064]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:801)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: AsyncRpcChannel-pool2-t10
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: AsyncRpcChannel-pool2-t13
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x61ff4593-shared-pool33-t117
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: MoveIntermediateToDone Thread #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ContainersLauncher #3
    java.io.FileInputStream.readBytes(Native Method)
    java.io.FileInputStream.read(FileInputStream.java:272)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
    java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
    sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
    sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
    java.io.InputStreamReader.read(InputStreamReader.java:184)
    java.io.BufferedReader.fill(BufferedReader.java:154)
    java.io.BufferedReader.read1(BufferedReader.java:205)
    java.io.BufferedReader.read(BufferedReader.java:279)
    org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:735)
    org.apache.hadoop.util.Shell.runCommand(Shell.java:531)
    org.apache.hadoop.util.Shell.run(Shell.java:456)
    org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    java.util.concurrent.FutureTask.run(FutureTask.java:262)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: AsyncRpcChannel-pool2-t16
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63319-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ApplicationMasterLauncher #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: rs(10.22.9.171,63282,1471559038490)-backup-pool20-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: PacketResponder: BP-1495083349-10.22.9.171-1471559035413:blk_1073741889_1065, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1230)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1301)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ApplicationMasterLauncher #3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: member: '10.22.9.171,63280,1471559038246' subprocedure-pool4-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataStreamer for file /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969 block BP-1495083349-10.22.9.171-1471559035413:blk_1073741889_1065
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:418)

Potentially hanging thread: ResponseProcessor for block BP-1495083349-10.22.9.171-1471559035413:blk_1073741889_1065
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:734)

Potentially hanging thread: MoveIntermediateToDone Thread #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DeletionService #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataStreamer for file /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559098548 block BP-1495083349-10.22.9.171-1471559035413:blk_1073741888_1064
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:418)

Potentially hanging thread: DeletionService #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ResponseProcessor for block BP-1495083349-10.22.9.171-1471559035413:blk_1073741890_1066
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:734)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63314-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63314-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63280-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: ContainersLauncher #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: B.defaultRpcServer.handler=1,queue=0,port=63280-SendThread(localhost:61765)
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)

Potentially hanging thread: RS_CLOSE_REGION-10.22.9.171:63282-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DeletionService #3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: region-location-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: (10.22.9.171,63280,1471559038246)-proc-coordinator-pool1-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataStreamer for file /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559098118 block BP-1495083349-10.22.9.171-1471559035413:blk_1073741885_1061
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:418)

Potentially hanging thread: AsyncRpcChannel-pool2-t11
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: IPC Client (387040080) connection to /10.22.9.171:63874 from tyu
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:933)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:978)

Potentially hanging thread: member: '10.22.9.171,63282,1471559038490' subprocedure-pool5-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DeletionService #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: PacketResponder: BP-1495083349-10.22.9.171-1471559035413:blk_1073741890_1066, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1230)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1301)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63280-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: LogDeleter #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1085)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: member: '10.22.9.171,63280,1471559038246' subprocedure-pool3-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
    java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:925)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63282-9
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: region-location-4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63280-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x61ff4593-shared-pool33-t120
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63282-7
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataStreamer for file /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118 block BP-1495083349-10.22.9.171-1471559035413:blk_1073741886_1062
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:418)

Potentially hanging thread: ApplicationMasterLauncher #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63282-8
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1801930766_1 at /127.0.0.1:63953 [Waiting for operation #2]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: PacketResponder: BP-1495083349-10.22.9.171-1471559035413:blk_1073741885_1061, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1230)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1301)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1801930766_1 at /127.0.0.1:63949 [Waiting for operation #3]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: B.defaultRpcServer.handler=1,queue=0,port=63280-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)

Potentially hanging thread: ResponseProcessor for block BP-1495083349-10.22.9.171-1471559035413:blk_1073741885_1061
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    java.io.FilterInputStream.read(FilterInputStream.java:83)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:734)

Potentially hanging thread: AsyncRpcChannel-pool2-t15
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DeletionService #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63282-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63282-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: PacketResponder: BP-1495083349-10.22.9.171-1471559035413:blk_1073741888_1064, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:503)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1230)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1301)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataStreamer for file /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984 block BP-1495083349-10.22.9.171-1471559035413:blk_1073741890_1066
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:418)

Potentially hanging thread: rs(10.22.9.171,63282,1471559038490)-backup-pool29-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63319-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: MoveIntermediateToDone Thread #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: ResponseProcessor for block BP-1495083349-10.22.9.171-1471559035413:blk_1073741888_1064 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:734) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63282-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DeletionService #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63282-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-10.22.9.171:63282-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataStreamer for file /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547 block BP-1495083349-10.22.9.171-1471559035413:blk_1073741887_1063 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:418) Potentially hanging thread: LogDeleter #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1085) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DeletionService #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: 
ApplicationMasterLauncher #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) - Thread LEAK? -, OpenFileDescriptor=1159 (was 1032) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=10240 (was 10240), SystemLoadAverage=382 (was 249) - SystemLoadAverage LEAK? -, ProcessCount=276 (was 273) - ProcessCount LEAK? -, AvailableMemoryMB=1230 (was 293) - AvailableMemoryMB LEAK? - 2016-08-18 15:27:16,376 WARN [main] hbase.ResourceChecker(135): Thread=879 is superior to 500 2016-08-18 15:27:16,376 WARN [main] hbase.ResourceChecker(135): OpenFileDescriptor=1159 is superior to 1024 2016-08-18 15:27:16,429 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741914_1090 127.0.0.1:63273 2016-08-18 15:27:16,430 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(1103): BLOCK* addToInvalidates: blk_1073741917_1093 127.0.0.1:63273 2016-08-18 15:27:16,430 INFO [main] hbase.HBaseTestingUtility(1142): Shutting down minicluster 2016-08-18 15:27:16,431 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e731000b 2016-08-18 15:27:16,431 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:27:16,432 DEBUG [main] util.JVMClusterUtil(241): Shutting down HBase Cluster 2016-08-18 15:27:16,432 DEBUG [AsyncRpcChannel-pool2-t5] ipc.AsyncRpcChannel$8(566): IPC Client (-1286503559) to /10.22.9.171:63314 from tyu: closed 2016-08-18 15:27:16,432 DEBUG [main] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.backup.master.BackupController 2016-08-18 15:27:16,432 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63314] ipc.RpcServer$Listener(912): RpcServer.listener,port=63314: DISCONNECTING client 10.22.9.171:63336 because read count=-1. Number of active connections: 2 2016-08-18 15:27:16,432 INFO [main] regionserver.HRegionServer(1918): STOPPED: Cluster shutdown requested 2016-08-18 15:27:16,433 INFO [M:0;10.22.9.171:63314] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-08-18 15:27:16,433 INFO [SplitLogWorker-10.22.9.171:63314] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-08-18 15:27:16,433 INFO [M:0;10.22.9.171:63314] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore. 
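[Editor's note] Nearly every thread the ResourceChecker dump above flags bottoms out in java.util.concurrent.LinkedBlockingQueue.take or a ScheduledThreadPoolExecutor delay queue: that is the stack shape of an idle executor worker parked while waiting for its next task, not a genuine hang. The checker reports them only because they outlive the test. A self-contained sketch (class and pool names are illustrative, not from the test run) that reproduces the same frame pattern:

```java
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class IdleWorkerDump {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    pool.submit(() -> { });            // one trivial task, then the worker goes idle
    Thread.sleep(500);                 // give the worker time to block in take()
    for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
      if (e.getKey().getName().startsWith("pool-")) {
        System.out.println("Potentially hanging thread: " + e.getKey().getName());
        for (StackTraceElement frame : e.getValue()) {
          System.out.println("    " + frame);  // trace bottoms out in LinkedBlockingQueue.take
        }
      }
    }
    pool.shutdown();
  }
}
```

The printed trace matches the dominant pattern in the dump above, which is why these entries usually indicate leaked-but-idle pools rather than deadlock.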
2016-08-18 15:27:16,433 INFO [SplitLogWorker-10.22.9.171:63314] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,63314,1471559042157 exiting 2016-08-18 15:27:16,434 INFO [M:0;10.22.9.171:63314] procedure2.ProcedureExecutor(532): Stopping the procedure executor 2016-08-18 15:27:16,434 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting 2016-08-18 15:27:16,434 INFO [M:0;10.22.9.171:63314] wal.WALProcedureStore(232): Stopping the WAL Procedure Store 2016-08-18 15:27:16,434 INFO [main] regionserver.HRegionServer(1918): STOPPED: Shutdown requested 2016-08-18 15:27:16,434 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63314-0x1569fc0e7310006, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running 2016-08-18 15:27:16,434 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting 2016-08-18 15:27:16,434 INFO [RS:0;10.22.9.171:63319] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-08-18 15:27:16,434 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63319-0x1569fc0e7310007, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running 2016-08-18 15:27:16,434 INFO [RS:0;10.22.9.171:63319] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore. 2016-08-18 15:27:16,434 INFO [SplitLogWorker-10.22.9.171:63319] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-08-18 15:27:16,434 INFO [SplitLogWorker-10.22.9.171:63319] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,63319,1471559042214 exiting 2016-08-18 15:27:16,435 INFO [RS:0;10.22.9.171:63319] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully. 2016-08-18 15:27:16,434 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:63314-0x1569fc0e7310006, quorum=localhost:61765, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running 2016-08-18 15:27:16,435 INFO [RS:0;10.22.9.171:63319] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-08-18 15:27:16,435 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:63319-0x1569fc0e7310007, quorum=localhost:61765, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running 2016-08-18 15:27:16,435 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting 2016-08-18 15:27:16,434 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting 2016-08-18 15:27:16,435 INFO [RS:0;10.22.9.171:63319] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-08-18 15:27:16,436 INFO [RS:0;10.22.9.171:63319] regionserver.HRegionServer(1063): stopping server 10.22.9.171,63319,1471559042214 2016-08-18 15:27:16,436 DEBUG [RS:0;10.22.9.171:63319] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator 2016-08-18 15:27:16,436 DEBUG [RS_CLOSE_REGION-10.22.9.171:63319-0] handler.CloseRegionHandler(90): Processing close of hbase:backup,,1471559044678.850b903ca0af513aa15775825a9a082c. 
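[Editor's note] The whole cascade starting at "Shutting down minicluster" above is driven by a single test-utility call. A rough sketch of the surrounding test lifecycle, assuming the HBaseTestingUtility API of this 2.0-SNAPSHOT branch (table and family names are made up):

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;

public class MiniClusterLifecycle {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster();                                 // local HDFS, ZooKeeper, master, regionserver
    util.createTable(TableName.valueOf("demo"), Bytes.toBytes("f"));
    // ... test body: load rows, run backup/restore, assert results ...
    util.shutdownMiniCluster();                              // produces the "STOPPED: Cluster shutdown requested" cascade
  }
}
```

Everything from the SplitLogWorker interrupts down to the region closes below is the server side of that shutdownMiniCluster() call.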
2016-08-18 15:27:16,436 INFO [RS:0;10.22.9.171:63319] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310008 2016-08-18 15:27:16,436 DEBUG [RS_CLOSE_REGION-10.22.9.171:63319-0] regionserver.HRegion(1419): Closing hbase:backup,,1471559044678.850b903ca0af513aa15775825a9a082c.: disabling compactions & flushes 2016-08-18 15:27:16,437 DEBUG [RS_CLOSE_REGION-10.22.9.171:63319-0] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1471559044678.850b903ca0af513aa15775825a9a082c. 2016-08-18 15:27:16,437 INFO [StoreCloserThread-hbase:backup,,1471559044678.850b903ca0af513aa15775825a9a082c.-1] regionserver.HStore(839): Closed meta 2016-08-18 15:27:16,437 DEBUG [RS:0;10.22.9.171:63319] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:27:16,437 INFO [StoreCloserThread-hbase:backup,,1471559044678.850b903ca0af513aa15775825a9a082c.-1] regionserver.HStore(839): Closed session 2016-08-18 15:27:16,437 INFO [RS:0;10.22.9.171:63319] regionserver.HRegionServer(1292): Waiting on 1 regions to close 2016-08-18 15:27:16,437 DEBUG [RS:0;10.22.9.171:63319] regionserver.HRegionServer(1296): {850b903ca0af513aa15775825a9a082c=hbase:backup,,1471559044678.850b903ca0af513aa15775825a9a082c.} 2016-08-18 15:27:16,438 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63319,1471559042214/10.22.9.171%2C63319%2C1471559042214.regiongroup-1.1471559045756 2016-08-18 15:27:16,444 DEBUG [RS_CLOSE_REGION-10.22.9.171:63319-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/backup/850b903ca0af513aa15775825a9a082c/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2 2016-08-18 15:27:16,445 INFO [RS_CLOSE_REGION-10.22.9.171:63319-0] regionserver.HRegion(1552): Closed hbase:backup,,1471559044678.850b903ca0af513aa15775825a9a082c. 2016-08-18 15:27:16,445 DEBUG [RS_CLOSE_REGION-10.22.9.171:63319-0] handler.CloseRegionHandler(122): Closed hbase:backup,,1471559044678.850b903ca0af513aa15775825a9a082c. 2016-08-18 15:27:16,487 INFO [IPC Server handler 1 on 63303] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63306 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-8d2e8b28-58cc-488d-aeb0-84469d8ca908:NORMAL:127.0.0.1:63306|RBW]]} size 465 2016-08-18 15:27:16,489 INFO [M:0;10.22.9.171:63314] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully. 2016-08-18 15:27:16,489 INFO [M:0;10.22.9.171:63314] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-08-18 15:27:16,489 INFO [M:0;10.22.9.171:63314] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-08-18 15:27:16,490 INFO [M:0;10.22.9.171:63314] regionserver.HRegionServer(1063): stopping server 10.22.9.171,63314,1471559042157 2016-08-18 15:27:16,490 DEBUG [M:0;10.22.9.171:63314] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator 2016-08-18 15:27:16,490 DEBUG [RS_CLOSE_REGION-10.22.9.171:63314-0] handler.CloseRegionHandler(90): Processing close of hbase:namespace,,1471559042631.a3b1a9605e4887d65b7f50b16f400740. 
2016-08-18 15:27:16,490 INFO [M:0;10.22.9.171:63314] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310009 2016-08-18 15:27:16,490 DEBUG [RS_CLOSE_REGION-10.22.9.171:63314-0] regionserver.HRegion(1419): Closing hbase:namespace,,1471559042631.a3b1a9605e4887d65b7f50b16f400740.: disabling compactions & flushes 2016-08-18 15:27:16,490 DEBUG [RS_CLOSE_REGION-10.22.9.171:63314-0] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1471559042631.a3b1a9605e4887d65b7f50b16f400740. 2016-08-18 15:27:16,490 INFO [RS_CLOSE_REGION-10.22.9.171:63314-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=344 B 2016-08-18 15:27:16,491 DEBUG [M:0;10.22.9.171:63314] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:27:16,491 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157/10.22.9.171%2C63314%2C1471559042157.regiongroup-0.1471559043123 2016-08-18 15:27:16,491 INFO [M:0;10.22.9.171:63314] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish... 2016-08-18 15:27:16,491 INFO [M:0;10.22.9.171:63314] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish... 2016-08-18 15:27:16,491 DEBUG [AsyncRpcChannel-pool2-t6] ipc.AsyncRpcChannel$8(566): IPC Client (-1017881840) to /10.22.9.171:63319 from tyu: closed 2016-08-18 15:27:16,491 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63319] ipc.RpcServer$Listener(912): RpcServer.listener,port=63319: DISCONNECTING client 10.22.9.171:63341 because read count=-1. Number of active connections: 1 2016-08-18 15:27:16,491 INFO [M:0;10.22.9.171:63314] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish... 2016-08-18 15:27:16,492 INFO [M:0;10.22.9.171:63314] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish... 
2016-08-18 15:27:16,492 INFO [M:0;10.22.9.171:63314] regionserver.HRegionServer(1292): Waiting on 2 regions to close 2016-08-18 15:27:16,492 DEBUG [M:0;10.22.9.171:63314] regionserver.HRegionServer(1296): {a3b1a9605e4887d65b7f50b16f400740=hbase:namespace,,1471559042631.a3b1a9605e4887d65b7f50b16f400740., 1588230740=hbase:meta,,1.1588230740} 2016-08-18 15:27:16,492 DEBUG [RS_CLOSE_META-10.22.9.171:63314-0] handler.CloseRegionHandler(90): Processing close of hbase:meta,,1.1588230740 2016-08-18 15:27:16,493 DEBUG [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes 2016-08-18 15:27:16,493 DEBUG [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740 2016-08-18 15:27:16,493 INFO [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=4.02 KB 2016-08-18 15:27:16,494 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157.meta/10.22.9.171%2C63314%2C1471559042157.meta.regiongroup-0.1471559042405 2016-08-18 15:27:16,501 INFO [IPC Server handler 1 on 63303] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63306 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f7c928e6-7708-4ce3-9a57-779614366990:NORMAL:127.0.0.1:63306|RBW]]} size 0 2016-08-18 15:27:16,502 INFO [RS_CLOSE_REGION-10.22.9.171:63314-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=6, memsize=344, hasBloomFilter=true, into tmp file hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/namespace/a3b1a9605e4887d65b7f50b16f400740/.tmp/002ce039d22c4b6b8bb6368a42fb8887 2016-08-18 15:27:16,502 INFO [IPC Server handler 7 on 63303] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63306 is added to blk_1073741840_1016{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-8d2e8b28-58cc-488d-aeb0-84469d8ca908:NORMAL:127.0.0.1:63306|RBW]]} size 6350 2016-08-18 15:27:16,510 DEBUG [RS_CLOSE_REGION-10.22.9.171:63314-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/namespace/a3b1a9605e4887d65b7f50b16f400740/.tmp/002ce039d22c4b6b8bb6368a42fb8887 as hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/namespace/a3b1a9605e4887d65b7f50b16f400740/info/002ce039d22c4b6b8bb6368a42fb8887 2016-08-18 15:27:16,516 INFO [RS_CLOSE_REGION-10.22.9.171:63314-0] regionserver.HStore(934): Added hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/namespace/a3b1a9605e4887d65b7f50b16f400740/info/002ce039d22c4b6b8bb6368a42fb8887, entries=2, sequenceid=6, filesize=4.8 K 2016-08-18 15:27:16,516 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157/10.22.9.171%2C63314%2C1471559042157.regiongroup-0.1471559043123 2016-08-18 15:27:16,517 INFO [RS_CLOSE_REGION-10.22.9.171:63314-0] regionserver.HRegion(2545): Finished memstore flush of ~344 B/344, currentsize=0 B/0 for region hbase:namespace,,1471559042631.a3b1a9605e4887d65b7f50b16f400740. 
in 27ms, sequenceid=6, compaction requested=false 2016-08-18 15:27:16,518 INFO [StoreCloserThread-hbase:namespace,,1471559042631.a3b1a9605e4887d65b7f50b16f400740.-1] regionserver.HStore(839): Closed info 2016-08-18 15:27:16,518 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157/10.22.9.171%2C63314%2C1471559042157.regiongroup-0.1471559043123 2016-08-18 15:27:16,522 DEBUG [RS_CLOSE_REGION-10.22.9.171:63314-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/namespace/a3b1a9605e4887d65b7f50b16f400740/recovered.edits/9.seqid to file, newSeqId=9, maxSeqId=2 2016-08-18 15:27:16,523 INFO [RS_CLOSE_REGION-10.22.9.171:63314-0] regionserver.HRegion(1552): Closed hbase:namespace,,1471559042631.a3b1a9605e4887d65b7f50b16f400740. 2016-08-18 15:27:16,523 DEBUG [RS_CLOSE_REGION-10.22.9.171:63314-0] handler.CloseRegionHandler(122): Closed hbase:namespace,,1471559042631.a3b1a9605e4887d65b7f50b16f400740. 2016-08-18 15:27:16,641 INFO [RS:0;10.22.9.171:63319] regionserver.HRegionServer(1091): stopping server 10.22.9.171,63319,1471559042214; all regions closed. 2016-08-18 15:27:16,641 DEBUG [RS:0;10.22.9.171:63319] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63319,1471559042214 2016-08-18 15:27:16,641 DEBUG [RS:0;10.22.9.171:63319] wal.FSHLog(1090): closing hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63319,1471559042214/10.22.9.171%2C63319%2C1471559042214.regiongroup-0.1471559044405 2016-08-18 15:27:16,652 INFO [IPC Server handler 0 on 63303] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63306 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f7c928e6-7708-4ce3-9a57-779614366990:NORMAL:127.0.0.1:63306|RBW]]} size 83 2016-08-18 15:27:16,655 DEBUG [RS:0;10.22.9.171:63319] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/oldWALs 2016-08-18 15:27:16,655 INFO [RS:0;10.22.9.171:63319] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63319%2C1471559042214.regiongroup-0:(num 1471559044405) 2016-08-18 15:27:16,655 DEBUG [RS:0;10.22.9.171:63319] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63319,1471559042214 2016-08-18 15:27:16,655 DEBUG [RS:0;10.22.9.171:63319] wal.FSHLog(1090): closing hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63319,1471559042214/10.22.9.171%2C63319%2C1471559042214.regiongroup-1.1471559045756 2016-08-18 15:27:16,658 INFO [IPC Server handler 2 on 63303] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63306 is added to blk_1073741838_1014{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-8d2e8b28-58cc-488d-aeb0-84469d8ca908:NORMAL:127.0.0.1:63306|RBW]]} size 669 2016-08-18 15:27:16,873 INFO [master//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting. 2016-08-18 15:27:16,873 INFO [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting. 2016-08-18 15:27:16,873 INFO [RS_OPEN_META-10.22.9.171:63314-0-MetaLogRoller] regionserver.LogRoller(170): LogRoller exiting. 
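[Editor's note] The lines above show a close-time flush end to end: the memstore is written to a .tmp store file, committed under the family directory, and the log reports "Finished memstore flush of ~344 B ... in 27ms". The same flush path can be exercised from a client with the standard Admin API; a hedged sketch, assuming a reachable cluster (the table choice is illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ForceFlush {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();  // assumes hbase-site.xml on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Drains the memstore into a new store file, like the close-time flush logged above.
      admin.flush(TableName.valueOf("hbase:namespace"));
    }
  }
}
```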
2016-08-18 15:27:16,874 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(146): regionserver//10.22.9.171:0.leaseChecker closing leases 2016-08-18 15:27:16,874 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(149): regionserver//10.22.9.171:0.leaseChecker closed leases 2016-08-18 15:27:16,895 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(146): master//10.22.9.171:0.leaseChecker closing leases 2016-08-18 15:27:16,895 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(149): master//10.22.9.171:0.leaseChecker closed leases 2016-08-18 15:27:16,909 INFO [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=15, memsize=3.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/.tmp/e8fffd36743b4434ba32264e49fda93c 2016-08-18 15:27:16,925 INFO [IPC Server handler 8 on 63303] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63306 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f7c928e6-7708-4ce3-9a57-779614366990:NORMAL:127.0.0.1:63306|RBW]]} size 0 2016-08-18 15:27:16,926 INFO [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=15, memsize=704, hasBloomFilter=false, into tmp file hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/.tmp/10a384cbd2374f3a9aedb6d470fa678b 2016-08-18 15:27:16,933 DEBUG [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/.tmp/e8fffd36743b4434ba32264e49fda93c as hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/info/e8fffd36743b4434ba32264e49fda93c 2016-08-18 15:27:16,939 INFO [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.HStore(934): Added hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/info/e8fffd36743b4434ba32264e49fda93c, entries=14, sequenceid=15, filesize=6.2 K 2016-08-18 15:27:16,940 DEBUG [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/.tmp/10a384cbd2374f3a9aedb6d470fa678b as hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/table/10a384cbd2374f3a9aedb6d470fa678b 2016-08-18 15:27:16,947 INFO [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.HStore(934): Added hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/table/10a384cbd2374f3a9aedb6d470fa678b, entries=4, sequenceid=15, filesize=4.7 K 2016-08-18 15:27:16,948 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157.meta/10.22.9.171%2C63314%2C1471559042157.meta.regiongroup-0.1471559042405 2016-08-18 15:27:16,948 INFO [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.HRegion(2545): Finished memstore flush of ~4.02 KB/4112, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 455ms, sequenceid=15, compaction requested=false 2016-08-18 15:27:16,950 INFO 
[StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info 2016-08-18 15:27:16,950 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table 2016-08-18 15:27:16,951 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157.meta/10.22.9.171%2C63314%2C1471559042157.meta.regiongroup-0.1471559042405 2016-08-18 15:27:16,955 DEBUG [RS_CLOSE_META-10.22.9.171:63314-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/data/hbase/meta/1588230740/recovered.edits/18.seqid to file, newSeqId=18, maxSeqId=3 2016-08-18 15:27:16,956 DEBUG [RS_CLOSE_META-10.22.9.171:63314-0] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2016-08-18 15:27:16,957 INFO [RS_CLOSE_META-10.22.9.171:63314-0] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740 2016-08-18 15:27:16,957 DEBUG [RS_CLOSE_META-10.22.9.171:63314-0] handler.CloseRegionHandler(122): Closed hbase:meta,,1.1588230740 2016-08-18 15:27:17,068 DEBUG [RS:0;10.22.9.171:63319] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/oldWALs 2016-08-18 15:27:17,068 INFO [RS:0;10.22.9.171:63319] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63319%2C1471559042214.regiongroup-1:(num 1471559045756) 2016-08-18 15:27:17,068 DEBUG [RS:0;10.22.9.171:63319] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:27:17,069 INFO [RS:0;10.22.9.171:63319] regionserver.Leases(146): RS:0;10.22.9.171:63319 closing leases 2016-08-18 15:27:17,069 INFO [RS:0;10.22.9.171:63319] regionserver.Leases(149): RS:0;10.22.9.171:63319 closed leases 2016-08-18 15:27:17,069 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63314] ipc.RpcServer$Listener(912): RpcServer.listener,port=63314: DISCONNECTING client 10.22.9.171:63325 because read count=-1. Number of active connections: 1 2016-08-18 15:27:17,069 DEBUG [AsyncRpcChannel-pool2-t4] ipc.AsyncRpcChannel$8(566): IPC Client (1121880717) to /10.22.9.171:63314 from tyu.hfs.1: closed 2016-08-18 15:27:17,069 INFO [RS:0;10.22.9.171:63319] hbase.ChoreService(323): Chore service for: 10.22.9.171,63319,1471559042214 had [[ScheduledChore: Name: 10.22.9.171,63319,1471559042214-MemstoreFlusherChore Period: 1000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,63319,1471559042214 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown 2016-08-18 15:27:17,069 INFO [RS:0;10.22.9.171:63319] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish... 2016-08-18 15:27:17,069 INFO [RS:0;10.22.9.171:63319] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish... 2016-08-18 15:27:17,069 INFO [RS:0;10.22.9.171:63319] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish... 2016-08-18 15:27:17,069 INFO [RS:0;10.22.9.171:63319] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish... 
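[Editor's note] The "Chore service for: ... had [...] on shutdown" entry above enumerates the periodic ScheduledChore tasks each server runs (memstore flusher, compaction tuner, cleaners). A small sketch of how such a chore is built and scheduled, assuming the ChoreService/ScheduledChore classes behave as their names suggest here; the chore name and period are invented:

```java
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreDemo {
  public static void main(String[] args) throws Exception {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService service = new ChoreService("demo");
    service.scheduleChore(new ScheduledChore("DemoChore", stopper, 1000) {  // period in ms
      @Override protected void chore() {
        System.out.println("periodic work, like the MemstoreFlusherChore above");
      }
    });
    Thread.sleep(3500);   // let it tick a few times
    service.shutdown();   // cancels outstanding chores, the source of the "had [...] on shutdown" lines
  }
}
```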
2016-08-18 15:27:17,073 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63319-0x1569fc0e7310007, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/replication/rs/10.22.9.171,63319,1471559042214 2016-08-18 15:27:17,077 INFO [RS:0;10.22.9.171:63319] ipc.RpcServer(2336): Stopping server on 63319 2016-08-18 15:27:17,077 INFO [RpcServer.listener,port=63319] ipc.RpcServer$Listener(816): RpcServer.listener,port=63319: stopping 2016-08-18 15:27:17,078 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped 2016-08-18 15:27:17,078 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping 2016-08-18 15:27:17,079 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63319-0x1569fc0e7310007, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.9.171,63319,1471559042214 2016-08-18 15:27:17,079 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63314-0x1569fc0e7310006, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.9.171,63319,1471559042214 2016-08-18 15:27:17,079 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63319-0x1569fc0e7310007, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs 2016-08-18 15:27:17,079 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,63319,1471559042214] 2016-08-18 15:27:17,080 INFO [main-EventThread] master.ServerManager(609): Cluster shutdown set; 10.22.9.171,63319,1471559042214 expired; onlineServers=1 2016-08-18 15:27:17,080 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63314-0x1569fc0e7310006, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs 2016-08-18 15:27:17,080 INFO [RS:0;10.22.9.171:63319] regionserver.HRegionServer(1135): stopping server 10.22.9.171,63319,1471559042214; zookeeper connection closed. 2016-08-18 15:27:17,080 INFO [RS:0;10.22.9.171:63319] regionserver.HRegionServer(1138): RS:0;10.22.9.171:63319 exiting 2016-08-18 15:27:17,080 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4bc15184] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(190): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4bc15184 2016-08-18 15:27:17,081 INFO [main] util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 regionserver(s) complete 2016-08-18 15:27:17,098 INFO [M:0;10.22.9.171:63314] regionserver.HRegionServer(1091): stopping server 10.22.9.171,63314,1471559042157; all regions closed. 
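[Editor's note] The master learns a regionserver is gone when the server's ephemeral znode under /rs disappears; that is the "RegionServer ephemeral node deleted, processing expiration" line above. A bare-bones sketch of the same watch pattern using the plain ZooKeeper client; the quorum address and znode path are placeholders:

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralNodeWatch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });
    Watcher onDelete = (WatchedEvent event) -> {
      if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
        // Mirrors RegionServerTracker: the owner's session ended, so treat the server as expired.
        System.out.println("ephemeral node deleted: " + event.getPath());
      }
    };
    zk.exists("/hbase/rs/example-server,16020,0", onDelete);  // hypothetical znode
    Thread.sleep(60_000);  // keep the session alive long enough to observe the event
    zk.close();
  }
}
```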
2016-08-18 15:27:17,098 DEBUG [M:0;10.22.9.171:63314] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157.meta 2016-08-18 15:27:17,098 DEBUG [M:0;10.22.9.171:63314] wal.FSHLog(1090): closing hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157.meta/10.22.9.171%2C63314%2C1471559042157.meta.regiongroup-0.1471559042405 2016-08-18 15:27:17,104 INFO [IPC Server handler 1 on 63303] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63306 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f7c928e6-7708-4ce3-9a57-779614366990:NORMAL:127.0.0.1:63306|RBW]]} size 83 2016-08-18 15:27:17,107 DEBUG [M:0;10.22.9.171:63314] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/oldWALs 2016-08-18 15:27:17,107 INFO [M:0;10.22.9.171:63314] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63314%2C1471559042157.meta.regiongroup-0:(num 1471559042405) 2016-08-18 15:27:17,107 DEBUG [M:0;10.22.9.171:63314] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157 2016-08-18 15:27:17,107 DEBUG [M:0;10.22.9.171:63314] wal.FSHLog(1090): closing hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157/10.22.9.171%2C63314%2C1471559042157.regiongroup-0.1471559043123 2016-08-18 15:27:17,111 INFO [IPC Server handler 4 on 63303] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63306 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-f7c928e6-7708-4ce3-9a57-779614366990:NORMAL:127.0.0.1:63306|RBW]]} size 83 2016-08-18 15:27:17,113 DEBUG [M:0;10.22.9.171:63314] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/oldWALs 2016-08-18 15:27:17,113 INFO [M:0;10.22.9.171:63314] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63314%2C1471559042157.regiongroup-0:(num 1471559043123) 2016-08-18 15:27:17,113 DEBUG [M:0;10.22.9.171:63314] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157 2016-08-18 15:27:17,113 DEBUG [M:0;10.22.9.171:63314] wal.FSHLog(1090): closing hdfs://localhost:63303/user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/WALs/10.22.9.171,63314,1471559042157/10.22.9.171%2C63314%2C1471559042157.regiongroup-1.1471559043405 2016-08-18 15:27:17,116 INFO [IPC Server handler 1 on 63303] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63306 is added to blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-8d2e8b28-58cc-488d-aeb0-84469d8ca908:NORMAL:127.0.0.1:63306|RBW]]} size 83 2016-08-18 15:27:17,118 DEBUG [M:0;10.22.9.171:63314] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/1f5de198-d804-4227-adce-926a53f8f786/oldWALs 2016-08-18 15:27:17,118 INFO [M:0;10.22.9.171:63314] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63314%2C1471559042157.regiongroup-1:(num 1471559043405) 2016-08-18 15:27:17,118 DEBUG [M:0;10.22.9.171:63314] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:27:17,118 INFO 
[M:0;10.22.9.171:63314] regionserver.Leases(146): M:0;10.22.9.171:63314 closing leases 2016-08-18 15:27:17,118 INFO [M:0;10.22.9.171:63314] regionserver.Leases(149): M:0;10.22.9.171:63314 closed leases 2016-08-18 15:27:17,119 INFO [M:0;10.22.9.171:63314] hbase.ChoreService(323): Chore service for: 10.22.9.171,63314,1471559042157 had [[ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63314,1471559042157-MemstoreFlusherChore Period: 1000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63314,1471559042157-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: CatalogJanitor-10.22.9.171:63314 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63314,1471559042157-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63314,1471559042157-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63314,1471559042157-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,63314,1471559042157 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63314,1471559042157-ClusterStatusChore Period: 60000 Unit: MILLISECONDS]] on shutdown 2016-08-18 15:27:17,120 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63314-0x1569fc0e7310006, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/replication/rs/10.22.9.171,63314,1471559042157 2016-08-18 15:27:17,121 INFO [M:0;10.22.9.171:63314] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish... 2016-08-18 15:27:17,121 INFO [M:0;10.22.9.171:63314] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish... 2016-08-18 15:27:17,121 DEBUG [M:0;10.22.9.171:63314] master.HMaster(1127): Stopping service threads 2016-08-18 15:27:17,122 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63314-0x1569fc0e7310006, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/master 2016-08-18 15:27:17,122 INFO [M:0;10.22.9.171:63314] hbase.ChoreService(323): Chore service for: 10.22.9.171,63314,1471559042157_splitLogManager_ had [[ScheduledChore: Name: SplitLogManager Timeout Monitor Period: 1000 Unit: MILLISECONDS]] on shutdown 2016-08-18 15:27:17,122 INFO [M:0;10.22.9.171:63314] master.LogRollMasterProcedureManager(55): stop: server shutting down. 2016-08-18 15:27:17,122 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:63314-0x1569fc0e7310006, quorum=localhost:61765, baseZNode=/2 Set watcher on znode that does not yet exist, /2/master 2016-08-18 15:27:17,122 INFO [M:0;10.22.9.171:63314] flush.MasterFlushTableProcedureManager(78): stop: server shutting down. 
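[Editor's note] Each "Moved 1 WAL file(s) to .../oldWALs" entry in the FSHLog lines above is the server archiving a closed write-ahead log into the shared oldWALs directory, from which cleaner chores (the LogsCleaner listed above) later delete it. At the filesystem level that archival is just a rename; a sketch with made-up paths:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ArchiveWal {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();          // assumes fs.defaultFS points at the cluster
    FileSystem fs = FileSystem.get(conf);
    Path wal = new Path("/hbase/WALs/server1,16020,1/server1%2C16020%2C1.regiongroup-0.12345");
    Path oldWals = new Path("/hbase/oldWALs");
    fs.mkdirs(oldWals);
    // Same visible effect as FSHLog's archival: the closed WAL reappears under oldWALs.
    boolean moved = fs.rename(wal, new Path(oldWals, wal.getName()));
    System.out.println("archived=" + moved);
  }
}
```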
2016-08-18 15:27:17,122 INFO [M:0;10.22.9.171:63314] ipc.RpcServer(2336): Stopping server on 63314 2016-08-18 15:27:17,122 INFO [RpcServer.listener,port=63314] ipc.RpcServer$Listener(816): RpcServer.listener,port=63314: stopping 2016-08-18 15:27:17,123 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped 2016-08-18 15:27:17,123 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping 2016-08-18 15:27:17,124 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63314-0x1569fc0e7310006, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.22.9.171,63314,1471559042157 2016-08-18 15:27:17,124 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,63314,1471559042157] 2016-08-18 15:27:17,125 INFO [M:0;10.22.9.171:63314] regionserver.HRegionServer(1135): stopping server 10.22.9.171,63314,1471559042157; zookeeper connection closed. 2016-08-18 15:27:17,125 INFO [M:0;10.22.9.171:63314] regionserver.HRegionServer(1138): M:0;10.22.9.171:63314 exiting 2016-08-18 15:27:17,125 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called 2016-08-18 15:27:17,133 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2016-08-18 15:27:17,238 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/dfscluster_c8b3414b-6456-404b-9891-e27f5f1bcef4/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/dfscluster_c8b3414b-6456-404b-9891-e27f5f1bcef4/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:63303] datanode.BPServiceActor(704): BPOfferService for Block pool BP-517819848-10.22.9.171-1471559041743 (Datanode Uuid a62500ab-58e5-4fb2-b9ad-7d6c009c35e8) service to localhost/127.0.0.1:63303 interrupted 2016-08-18 15:27:17,238 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/dfscluster_c8b3414b-6456-404b-9891-e27f5f1bcef4/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/bec90144-e545-4d6f-9187-619403fc2c48/dfscluster_c8b3414b-6456-404b-9891-e27f5f1bcef4/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:63303] datanode.BPServiceActor(835): Ending block pool service for: Block pool BP-517819848-10.22.9.171-1471559041743 (Datanode Uuid a62500ab-58e5-4fb2-b9ad-7d6c009c35e8) service to localhost/127.0.0.1:63303 2016-08-18 15:27:17,295 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2016-08-18 15:27:17,423 INFO [main] hbase.HBaseTestingUtility(1155): Minicluster is down 2016-08-18 15:27:17,423 INFO [main] hbase.HBaseTestingUtility(1142): Shutting down minicluster 2016-08-18 15:27:17,423 INFO [main] client.ConnectionImplementation(1811): Closing master protocol: MasterService 2016-08-18 15:27:17,423 INFO [main] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310005 2016-08-18 15:27:17,426 DEBUG [main] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:27:17,427 DEBUG [main] util.JVMClusterUtil(241): Shutting down HBase Cluster 2016-08-18 15:27:17,427 DEBUG [AsyncRpcChannel-pool2-t16] ipc.AsyncRpcChannel$8(566): IPC Client (-1528815595) to /10.22.9.171:63282 from tyu: closed 2016-08-18 
15:27:17,427 DEBUG [main] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.backup.master.BackupController 2016-08-18 15:27:17,427 INFO [main] regionserver.HRegionServer(1918): STOPPED: Cluster shutdown requested 2016-08-18 15:27:17,427 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63439 because read count=-1. Number of active connections: 6 2016-08-18 15:27:17,427 DEBUG [RpcServer.reader=0,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63363 because read count=-1. Number of active connections: 9 2016-08-18 15:27:17,427 DEBUG [AsyncRpcChannel-pool2-t7] ipc.AsyncRpcChannel$8(566): IPC Client (-10313466) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:27:17,427 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63301 because read count=-1. Number of active connections: 9 2016-08-18 15:27:17,427 DEBUG [AsyncRpcChannel-pool2-t2] ipc.AsyncRpcChannel$8(566): IPC Client (-711543016) to /10.22.9.171:63280 from tyu: closed 2016-08-18 15:27:17,427 INFO [M:0;10.22.9.171:63280] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-08-18 15:27:17,428 INFO [M:0;10.22.9.171:63280] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore. 2016-08-18 15:27:17,428 INFO [SplitLogWorker-10.22.9.171:63280] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-08-18 15:27:17,428 INFO [M:0;10.22.9.171:63280] procedure2.ProcedureExecutor(532): Stopping the procedure executor 2016-08-18 15:27:17,428 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running 2016-08-18 15:27:17,428 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running 2016-08-18 15:27:17,429 INFO [M:0;10.22.9.171:63280] wal.WALProcedureStore(232): Stopping the WAL Procedure Store 2016-08-18 15:27:17,428 INFO [main] regionserver.HRegionServer(1918): STOPPED: Shutdown requested 2016-08-18 15:27:17,428 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting 2016-08-18 15:27:17,428 INFO [SplitLogWorker-10.22.9.171:63280] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,63280,1471559038246 exiting 2016-08-18 15:27:17,428 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting 2016-08-18 15:27:17,429 DEBUG [main-EventThread] zookeeper.ZKUtil(367): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running 2016-08-18 15:27:17,429 INFO [RS:0;10.22.9.171:63282] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-08-18 15:27:17,429 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running 2016-08-18 15:27:17,429 INFO [RS:0;10.22.9.171:63282] regionserver.HeapMemoryManager(202): Stoping HeapMemoryTuner chore. 
2016-08-18 15:27:17,429 INFO [SplitLogWorker-10.22.9.171:63282] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-08-18 15:27:17,430 INFO [RS:0;10.22.9.171:63282] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully. 2016-08-18 15:27:17,430 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.0 exiting 2016-08-18 15:27:17,430 INFO [RS:0;10.22.9.171:63282] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-08-18 15:27:17,430 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(280): MemStoreFlusher.1 exiting 2016-08-18 15:27:17,430 INFO [SplitLogWorker-10.22.9.171:63282] regionserver.SplitLogWorker(155): SplitLogWorker 10.22.9.171,63282,1471559038490 exiting 2016-08-18 15:27:17,430 INFO [RS:0;10.22.9.171:63282] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-08-18 15:27:17,431 INFO [RS:0;10.22.9.171:63282] regionserver.HRegionServer(1063): stopping server 10.22.9.171,63282,1471559038490 2016-08-18 15:27:17,431 DEBUG [RS:0;10.22.9.171:63282] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator 2016-08-18 15:27:17,431 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] handler.CloseRegionHandler(90): Processing close of hbase:backup,,1471559041669.f8c39842b4cd271b3d073c7bb2738adb. 2016-08-18 15:27:17,431 INFO [RS:0;10.22.9.171:63282] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310002 2016-08-18 15:27:17,431 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] handler.CloseRegionHandler(90): Processing close of ns1:test-1471559060953,,1471559063908.c61c3bf2f83c0b95289129ff052b32c3. 2016-08-18 15:27:17,431 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(90): Processing close of ns3:test-14715590609532,,1471559066772.e196ea4c6ebf18d7f346b1209ee442d8. 2016-08-18 15:27:17,432 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(1419): Closing ns1:test-1471559060953,,1471559063908.c61c3bf2f83c0b95289129ff052b32c3.: disabling compactions & flushes 2016-08-18 15:27:17,431 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HRegion(1419): Closing hbase:backup,,1471559041669.f8c39842b4cd271b3d073c7bb2738adb.: disabling compactions & flushes 2016-08-18 15:27:17,432 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(1446): Updates disabled for region ns1:test-1471559060953,,1471559063908.c61c3bf2f83c0b95289129ff052b32c3. 2016-08-18 15:27:17,432 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1419): Closing ns3:test-14715590609532,,1471559066772.e196ea4c6ebf18d7f346b1209ee442d8.: disabling compactions & flushes 2016-08-18 15:27:17,432 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HRegion(1446): Updates disabled for region hbase:backup,,1471559041669.f8c39842b4cd271b3d073c7bb2738adb. 2016-08-18 15:27:17,432 DEBUG [RS:0;10.22.9.171:63282] ipc.AsyncRpcClient(320): Stopping async HBase RPC client 2016-08-18 15:27:17,432 INFO [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=17.23 KB 2016-08-18 15:27:17,432 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=16.24 KB 2016-08-18 15:27:17,432 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1446): Updates disabled for region ns3:test-14715590609532,,1471559066772.e196ea4c6ebf18d7f346b1209ee442d8. 
2016-08-18 15:27:17,432 DEBUG [AsyncRpcChannel-pool2-t15] ipc.AsyncRpcChannel$8(566): IPC Client (-1840643661) to /10.22.9.171:63280 from tyu.hfs.0: closed
2016-08-18 15:27:17,432 DEBUG [RpcServer.reader=2,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63394 because read count=-1. Number of active connections: 7
2016-08-18 15:27:17,432 INFO [RS:0;10.22.9.171:63282] regionserver.HRegionServer(1292): Waiting on 9 regions to close
2016-08-18 15:27:17,432 DEBUG [RS:0;10.22.9.171:63282] regionserver.HRegionServer(1296): {f8c39842b4cd271b3d073c7bb2738adb=hbase:backup,,1471559041669.f8c39842b4cd271b3d073c7bb2738adb., e196ea4c6ebf18d7f346b1209ee442d8=ns3:test-14715590609532,,1471559066772.e196ea4c6ebf18d7f346b1209ee442d8., c61c3bf2f83c0b95289129ff052b32c3=ns1:test-1471559060953,,1471559063908.c61c3bf2f83c0b95289129ff052b32c3., a4882d4755c241a0547202f501525250=ns4:test-14715590609533,,1471559068016.a4882d4755c241a0547202f501525250., 854a47f76da7ac7120b78cba57ef767c=ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c., 64e80db997c7530f46efbcdcb1606064=ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064., bf3c6d412d1b40a1b33f3f2c30bb496a=ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a., 5f33ce9d76378cebce2b8fb0a44fa79e=ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e., eafb138c6dd37e9e90df990bbe563d21=ns2:test-14715590609531,,1471559065354.eafb138c6dd37e9e90df990bbe563d21.}
2016-08-18 15:27:17,432 INFO [StoreCloserThread-ns3:test-14715590609532,,1471559066772.e196ea4c6ebf18d7f346b1209ee442d8.-1] regionserver.HStore(839): Closed f
2016-08-18 15:27:17,432 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:27:17,433 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:27:17,433 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547
2016-08-18 15:27:17,440 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns3/test-14715590609532/e196ea4c6ebf18d7f346b1209ee442d8/recovered.edits/5.seqid to file, newSeqId=5, maxSeqId=2
2016-08-18 15:27:17,441 INFO [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1552): Closed ns3:test-14715590609532,,1471559066772.e196ea4c6ebf18d7f346b1209ee442d8.
2016-08-18 15:27:17,441 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(122): Closed ns3:test-14715590609532,,1471559066772.e196ea4c6ebf18d7f346b1209ee442d8.
2016-08-18 15:27:17,441 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(90): Processing close of ns4:test-14715590609533,,1471559068016.a4882d4755c241a0547202f501525250.
2016-08-18 15:27:17,441 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1419): Closing ns4:test-14715590609533,,1471559068016.a4882d4755c241a0547202f501525250.: disabling compactions & flushes
2016-08-18 15:27:17,441 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1446): Updates disabled for region ns4:test-14715590609533,,1471559068016.a4882d4755c241a0547202f501525250.
2016-08-18 15:27:17,442 INFO [StoreCloserThread-ns4:test-14715590609533,,1471559068016.a4882d4755c241a0547202f501525250.-1] regionserver.HStore(839): Closed f
2016-08-18 15:27:17,442 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:27:17,446 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073742001_1177{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:27:17,446 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns4/test-14715590609533/a4882d4755c241a0547202f501525250/recovered.edits/5.seqid to file, newSeqId=5, maxSeqId=2
2016-08-18 15:27:17,446 INFO [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=21, memsize=13.5 K, hasBloomFilter=true, into tmp file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/.tmp/efcd1e785377408095dc187067b57a3d
2016-08-18 15:27:17,447 INFO [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1552): Closed ns4:test-14715590609533,,1471559068016.a4882d4755c241a0547202f501525250.
2016-08-18 15:27:17,447 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(122): Closed ns4:test-14715590609533,,1471559068016.a4882d4755c241a0547202f501525250.
2016-08-18 15:27:17,447 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(90): Processing close of ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:27:17,448 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1419): Closing ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.: disabling compactions & flushes
2016-08-18 15:27:17,448 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1446): Updates disabled for region ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:27:17,448 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073742002_1178{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:27:17,449 INFO [StoreCloserThread-ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.-1] regionserver.HStore(839): Closed f
2016-08-18 15:27:17,449 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=205, memsize=16.2 K, hasBloomFilter=true, into tmp file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/test-1471559060953/c61c3bf2f83c0b95289129ff052b32c3/.tmp/1ce325c0ff4f4137b1a4bf6a9ac99691
2016-08-18 15:27:17,449 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:27:17,453 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/table1_restore/854a47f76da7ac7120b78cba57ef767c/recovered.edits/8.seqid to file, newSeqId=8, maxSeqId=2
2016-08-18 15:27:17,454 INFO [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1552): Closed ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:27:17,454 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(122): Closed ns1:table1_restore,,1471559106284.854a47f76da7ac7120b78cba57ef767c.
2016-08-18 15:27:17,454 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(90): Processing close of ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:27:17,454 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1419): Closing ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.: disabling compactions & flushes
2016-08-18 15:27:17,454 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1446): Updates disabled for region ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:27:17,455 INFO [StoreCloserThread-ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.-1] regionserver.HStore(839): Closed f
2016-08-18 15:27:17,455 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547
2016-08-18 15:27:17,456 INFO [IPC Server handler 3 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 465
2016-08-18 15:27:17,456 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/test-1471559060953/c61c3bf2f83c0b95289129ff052b32c3/.tmp/1ce325c0ff4f4137b1a4bf6a9ac99691 as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/test-1471559060953/c61c3bf2f83c0b95289129ff052b32c3/f/1ce325c0ff4f4137b1a4bf6a9ac99691
2016-08-18 15:27:17,456 INFO [M:0;10.22.9.171:63280] regionserver.LogRollRegionServerProcedureManager(96): Stopping RegionServerBackupManager gracefully.
2016-08-18 15:27:17,456 INFO [M:0;10.22.9.171:63280] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-08-18 15:27:17,457 INFO [M:0;10.22.9.171:63280] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-08-18 15:27:17,457 INFO [M:0;10.22.9.171:63280] regionserver.HRegionServer(1063): stopping server 10.22.9.171,63280,1471559038246
2016-08-18 15:27:17,457 DEBUG [M:0;10.22.9.171:63280] zookeeper.MetaTableLocator(612): Stopping MetaTableLocator
2016-08-18 15:27:17,457 DEBUG [RS_CLOSE_REGION-10.22.9.171:63280-0] handler.CloseRegionHandler(90): Processing close of hbase:namespace,,1471559039556.1934919e607520cdbfbecc5343937a9f.
2016-08-18 15:27:17,457 INFO [M:0;10.22.9.171:63280] client.ConnectionImplementation(1346): Closing zookeeper sessionid=0x1569fc0e7310003
2016-08-18 15:27:17,457 DEBUG [RS_CLOSE_REGION-10.22.9.171:63280-0] regionserver.HRegion(1419): Closing hbase:namespace,,1471559039556.1934919e607520cdbfbecc5343937a9f.: disabling compactions & flushes
2016-08-18 15:27:17,458 DEBUG [RS_CLOSE_REGION-10.22.9.171:63280-0] regionserver.HRegion(1446): Updates disabled for region hbase:namespace,,1471559039556.1934919e607520cdbfbecc5343937a9f.
2016-08-18 15:27:17,458 INFO [RS_CLOSE_REGION-10.22.9.171:63280-0] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=1016 B
2016-08-18 15:27:17,458 DEBUG [M:0;10.22.9.171:63280] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:27:17,460 INFO [M:0;10.22.9.171:63280] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-18 15:27:17,460 DEBUG [AsyncRpcChannel-pool2-t3] ipc.AsyncRpcChannel$8(566): IPC Client (-1905554337) to /10.22.9.171:63282 from tyu: closed
2016-08-18 15:27:17,460 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559098118
2016-08-18 15:27:17,461 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63313 because read count=-1. Number of active connections: 5
2016-08-18 15:27:17,461 DEBUG [AsyncRpcChannel-pool2-t14] ipc.AsyncRpcChannel$8(566): IPC Client (1941365133) to /10.22.9.171:63282 from tyu: closed
2016-08-18 15:27:17,460 INFO [M:0;10.22.9.171:63280] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-18 15:27:17,461 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63282] ipc.RpcServer$Listener(912): RpcServer.listener,port=63282: DISCONNECTING client 10.22.9.171:63390 because read count=-1. Number of active connections: 4
2016-08-18 15:27:17,461 INFO [M:0;10.22.9.171:63280] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-18 15:27:17,461 INFO [M:0;10.22.9.171:63280] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-18 15:27:17,462 INFO [M:0;10.22.9.171:63280] regionserver.HRegionServer(1292): Waiting on 2 regions to close
2016-08-18 15:27:17,462 DEBUG [M:0;10.22.9.171:63280] regionserver.HRegionServer(1296): {1934919e607520cdbfbecc5343937a9f=hbase:namespace,,1471559039556.1934919e607520cdbfbecc5343937a9f., 1588230740=hbase:meta,,1.1588230740}
2016-08-18 15:27:17,462 DEBUG [RS_CLOSE_META-10.22.9.171:63280-0] handler.CloseRegionHandler(90): Processing close of hbase:meta,,1.1588230740
2016-08-18 15:27:17,463 DEBUG [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.HRegion(1419): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-08-18 15:27:17,463 DEBUG [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.HRegion(1446): Updates disabled for region hbase:meta,,1.1588230740
2016-08-18 15:27:17,463 INFO [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.HRegion(2345): Flushing 2/2 column families, memstore=28.55 KB
2016-08-18 15:27:17,463 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:27:17,464 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns3/table3_restore/64e80db997c7530f46efbcdcb1606064/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-18 15:27:17,465 INFO [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1552): Closed ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:27:17,465 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(122): Closed ns3:table3_restore,,1471559109030.64e80db997c7530f46efbcdcb1606064.
2016-08-18 15:27:17,465 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(90): Processing close of ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:27:17,465 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1419): Closing ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.: disabling compactions & flushes
2016-08-18 15:27:17,465 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1446): Updates disabled for region ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:27:17,467 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073742003_1179{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:27:17,467 INFO [StoreCloserThread-ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.-1] regionserver.HStore(839): Closed f
2016-08-18 15:27:17,467 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:27:17,468 INFO [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=21, memsize=3.7 K, hasBloomFilter=true, into tmp file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/.tmp/fa7f41111193474f996812f193f23284
2016-08-18 15:27:17,469 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HStore(934): Added hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/test-1471559060953/c61c3bf2f83c0b95289129ff052b32c3/f/1ce325c0ff4f4137b1a4bf6a9ac99691, entries=99, sequenceid=205, filesize=8.5 K
2016-08-18 15:27:17,469 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:27:17,470 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(2545): Finished memstore flush of ~16.24 KB/16632, currentsize=0 B/0 for region ns1:test-1471559060953,,1471559063908.c61c3bf2f83c0b95289129ff052b32c3. in 38ms, sequenceid=205, compaction requested=false
2016-08-18 15:27:17,471 INFO [StoreCloserThread-ns1:test-1471559060953,,1471559063908.c61c3bf2f83c0b95289129ff052b32c3.-1] regionserver.HStore(839): Closed f
2016-08-18 15:27:17,472 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:27:17,473 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/table2_restore/bf3c6d412d1b40a1b33f3f2c30bb496a/recovered.edits/8.seqid to file, newSeqId=8, maxSeqId=2
2016-08-18 15:27:17,474 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073742004_1180{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:27:17,475 INFO [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1552): Closed ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:27:17,475 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(122): Closed ns2:table2_restore,,1471559107675.bf3c6d412d1b40a1b33f3f2c30bb496a.
2016-08-18 15:27:17,475 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(90): Processing close of ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e.
2016-08-18 15:27:17,475 INFO [RS_CLOSE_REGION-10.22.9.171:63280-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=10, memsize=1016, hasBloomFilter=true, into tmp file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/namespace/1934919e607520cdbfbecc5343937a9f/.tmp/262e48e4e5094d2692f48a12f21ef8e9
2016-08-18 15:27:17,475 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1419): Closing ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e.: disabling compactions & flushes
2016-08-18 15:27:17,475 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1446): Updates disabled for region ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e.
2016-08-18 15:27:17,476 INFO [StoreCloserThread-ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e.-1] regionserver.HStore(839): Closed f
2016-08-18 15:27:17,476 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:27:17,476 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(146): regionserver//10.22.9.171:0.leaseChecker closing leases
2016-08-18 15:27:17,476 INFO [regionserver//10.22.9.171:0.leaseChecker] regionserver.Leases(149): regionserver//10.22.9.171:0.leaseChecker closed leases
2016-08-18 15:27:17,477 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/.tmp/efcd1e785377408095dc187067b57a3d as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/meta/efcd1e785377408095dc187067b57a3d
2016-08-18 15:27:17,477 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns1/test-1471559060953/c61c3bf2f83c0b95289129ff052b32c3/recovered.edits/208.seqid to file, newSeqId=208, maxSeqId=2
2016-08-18 15:27:17,479 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(1552): Closed ns1:test-1471559060953,,1471559063908.c61c3bf2f83c0b95289129ff052b32c3.
2016-08-18 15:27:17,479 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] handler.CloseRegionHandler(122): Closed ns1:test-1471559060953,,1471559063908.c61c3bf2f83c0b95289129ff052b32c3.
2016-08-18 15:27:17,479 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] handler.CloseRegionHandler(90): Processing close of ns2:test-14715590609531,,1471559065354.eafb138c6dd37e9e90df990bbe563d21.
2016-08-18 15:27:17,479 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(1419): Closing ns2:test-14715590609531,,1471559065354.eafb138c6dd37e9e90df990bbe563d21.: disabling compactions & flushes
2016-08-18 15:27:17,479 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(1446): Updates disabled for region ns2:test-14715590609531,,1471559065354.eafb138c6dd37e9e90df990bbe563d21.
2016-08-18 15:27:17,479 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(2345): Flushing 1/1 column families, memstore=840 B
2016-08-18 15:27:17,479 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:27:17,481 INFO [IPC Server handler 7 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073742005_1181{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|FINALIZED]]} size 0
2016-08-18 15:27:17,482 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns4/table4_restore/5f33ce9d76378cebce2b8fb0a44fa79e/recovered.edits/4.seqid to file, newSeqId=4, maxSeqId=2
2016-08-18 15:27:17,482 INFO [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=77, memsize=24.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/.tmp/b6e4eb7da8b04d9b879dec379b7e774f
2016-08-18 15:27:17,483 INFO [RS_CLOSE_REGION-10.22.9.171:63282-1] regionserver.HRegion(1552): Closed ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e.
2016-08-18 15:27:17,483 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-1] handler.CloseRegionHandler(122): Closed ns4:table4_restore,,1471559111289.5f33ce9d76378cebce2b8fb0a44fa79e.
2016-08-18 15:27:17,485 DEBUG [RS_CLOSE_REGION-10.22.9.171:63280-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/namespace/1934919e607520cdbfbecc5343937a9f/.tmp/262e48e4e5094d2692f48a12f21ef8e9 as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/namespace/1934919e607520cdbfbecc5343937a9f/info/262e48e4e5094d2692f48a12f21ef8e9
2016-08-18 15:27:17,486 INFO [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HStore(934): Added hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/meta/efcd1e785377408095dc187067b57a3d, entries=41, sequenceid=21, filesize=11.3 K
2016-08-18 15:27:17,487 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/.tmp/fa7f41111193474f996812f193f23284 as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/session/fa7f41111193474f996812f193f23284
2016-08-18 15:27:17,489 INFO [IPC Server handler 9 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073742006_1182{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:27:17,490 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=111, memsize=840, hasBloomFilter=true, into tmp file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/test-14715590609531/eafb138c6dd37e9e90df990bbe563d21/.tmp/c51756487bb343b485e2f91afd170098
2016-08-18 15:27:17,490 INFO [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.StoreFile$Reader(1606): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b6e4eb7da8b04d9b879dec379b7e774f
2016-08-18 15:27:17,493 INFO [RS_CLOSE_REGION-10.22.9.171:63280-0] regionserver.HStore(934): Added hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/namespace/1934919e607520cdbfbecc5343937a9f/info/262e48e4e5094d2692f48a12f21ef8e9, entries=6, sequenceid=10, filesize=4.9 K
2016-08-18 15:27:17,493 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559098118
2016-08-18 15:27:17,494 INFO [RS_CLOSE_REGION-10.22.9.171:63280-0] regionserver.HRegion(2545): Finished memstore flush of ~1016 B/1016, currentsize=0 B/0 for region hbase:namespace,,1471559039556.1934919e607520cdbfbecc5343937a9f. in 36ms, sequenceid=10, compaction requested=false
2016-08-18 15:27:17,495 INFO [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HStore(934): Added hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/session/fa7f41111193474f996812f193f23284, entries=2, sequenceid=21, filesize=6.2 K
2016-08-18 15:27:17,495 INFO [StoreCloserThread-hbase:namespace,,1471559039556.1934919e607520cdbfbecc5343937a9f.-1] regionserver.HStore(839): Closed info
2016-08-18 15:27:17,495 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:27:17,496 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559098118
2016-08-18 15:27:17,496 INFO [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HRegion(2545): Finished memstore flush of ~17.23 KB/17640, currentsize=0 B/0 for region hbase:backup,,1471559041669.f8c39842b4cd271b3d073c7bb2738adb. in 64ms, sequenceid=21, compaction requested=false
2016-08-18 15:27:17,498 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(146): master//10.22.9.171:0.leaseChecker closing leases
2016-08-18 15:27:17,500 INFO [StoreCloserThread-hbase:backup,,1471559041669.f8c39842b4cd271b3d073c7bb2738adb.-1] regionserver.HStore(839): Closed meta
2016-08-18 15:27:17,500 INFO [RS_OPEN_META-10.22.9.171:63280-0-MetaLogRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-18 15:27:17,500 INFO [master//10.22.9.171:0.leaseChecker] regionserver.Leases(149): master//10.22.9.171:0.leaseChecker closed leases
2016-08-18 15:27:17,501 INFO [StoreCloserThread-hbase:backup,,1471559041669.f8c39842b4cd271b3d073c7bb2738adb.-1] regionserver.HStore(839): Closed session
2016-08-18 15:27:17,501 DEBUG [sync.2] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:27:17,501 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/test-14715590609531/eafb138c6dd37e9e90df990bbe563d21/.tmp/c51756487bb343b485e2f91afd170098 as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/test-14715590609531/eafb138c6dd37e9e90df990bbe563d21/f/c51756487bb343b485e2f91afd170098
2016-08-18 15:27:17,502 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073742007_1183{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 0
2016-08-18 15:27:17,503 INFO [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=77, memsize=4.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/.tmp/8b934392962f47b2b300ae5153d8db37
2016-08-18 15:27:17,504 DEBUG [RS_CLOSE_REGION-10.22.9.171:63280-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/namespace/1934919e607520cdbfbecc5343937a9f/recovered.edits/13.seqid to file, newSeqId=13, maxSeqId=2
2016-08-18 15:27:17,506 INFO [RS_CLOSE_REGION-10.22.9.171:63280-0] regionserver.HRegion(1552): Closed hbase:namespace,,1471559039556.1934919e607520cdbfbecc5343937a9f.
2016-08-18 15:27:17,506 DEBUG [RS_CLOSE_REGION-10.22.9.171:63280-0] handler.CloseRegionHandler(122): Closed hbase:namespace,,1471559039556.1934919e607520cdbfbecc5343937a9f.
2016-08-18 15:27:17,507 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/backup/f8c39842b4cd271b3d073c7bb2738adb/recovered.edits/24.seqid to file, newSeqId=24, maxSeqId=2
2016-08-18 15:27:17,508 INFO [RS_CLOSE_REGION-10.22.9.171:63282-0] regionserver.HRegion(1552): Closed hbase:backup,,1471559041669.f8c39842b4cd271b3d073c7bb2738adb.
2016-08-18 15:27:17,508 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-0] handler.CloseRegionHandler(122): Closed hbase:backup,,1471559041669.f8c39842b4cd271b3d073c7bb2738adb.
2016-08-18 15:27:17,510 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HStore(934): Added hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/test-14715590609531/eafb138c6dd37e9e90df990bbe563d21/f/c51756487bb343b485e2f91afd170098, entries=5, sequenceid=111, filesize=4.9 K
2016-08-18 15:27:17,511 DEBUG [sync.3] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:27:17,511 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(2545): Finished memstore flush of ~840 B/840, currentsize=0 B/0 for region ns2:test-14715590609531,,1471559065354.eafb138c6dd37e9e90df990bbe563d21. in 32ms, sequenceid=111, compaction requested=false
2016-08-18 15:27:17,512 DEBUG [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/.tmp/b6e4eb7da8b04d9b879dec379b7e774f as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/info/b6e4eb7da8b04d9b879dec379b7e774f
2016-08-18 15:27:17,512 INFO [StoreCloserThread-ns2:test-14715590609531,,1471559065354.eafb138c6dd37e9e90df990bbe563d21.-1] regionserver.HStore(839): Closed f
2016-08-18 15:27:17,513 DEBUG [sync.4] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:27:17,516 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/ns2/test-14715590609531/eafb138c6dd37e9e90df990bbe563d21/recovered.edits/114.seqid to file, newSeqId=114, maxSeqId=2
2016-08-18 15:27:17,517 INFO [RS_CLOSE_REGION-10.22.9.171:63282-2] regionserver.HRegion(1552): Closed ns2:test-14715590609531,,1471559065354.eafb138c6dd37e9e90df990bbe563d21.
2016-08-18 15:27:17,518 DEBUG [RS_CLOSE_REGION-10.22.9.171:63282-2] handler.CloseRegionHandler(122): Closed ns2:test-14715590609531,,1471559065354.eafb138c6dd37e9e90df990bbe563d21.
2016-08-18 15:27:17,518 INFO [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.StoreFile$Reader(1606): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b6e4eb7da8b04d9b879dec379b7e774f
2016-08-18 15:27:17,519 INFO [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.HStore(934): Added hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/info/b6e4eb7da8b04d9b879dec379b7e774f, entries=100, sequenceid=77, filesize=16.5 K
2016-08-18 15:27:17,520 DEBUG [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.HRegionFileSystem(382): Committing store file hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/.tmp/8b934392962f47b2b300ae5153d8db37 as hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/table/8b934392962f47b2b300ae5153d8db37
2016-08-18 15:27:17,525 INFO [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.HStore(934): Added hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/table/8b934392962f47b2b300ae5153d8db37, entries=24, sequenceid=77, filesize=5.7 K
2016-08-18 15:27:17,525 DEBUG [sync.0] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:27:17,526 INFO [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.HRegion(2545): Finished memstore flush of ~28.55 KB/29232, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 63ms, sequenceid=77, compaction requested=false
2016-08-18 15:27:17,526 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed info
2016-08-18 15:27:17,527 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(839): Closed table
2016-08-18 15:27:17,527 DEBUG [sync.1] wal.FSHLog$SyncRunner(1277): syncing writer hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:27:17,531 DEBUG [RS_CLOSE_META-10.22.9.171:63280-0] wal.WALSplitter(730): Wrote region seqId=hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/data/hbase/meta/1588230740/recovered.edits/80.seqid to file, newSeqId=80, maxSeqId=3
2016-08-18 15:27:17,531 DEBUG [RS_CLOSE_META-10.22.9.171:63280-0] coprocessor.CoprocessorHost(271): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2016-08-18 15:27:17,532 INFO [RS_CLOSE_META-10.22.9.171:63280-0] regionserver.HRegion(1552): Closed hbase:meta,,1.1588230740
2016-08-18 15:27:17,532 DEBUG [RS_CLOSE_META-10.22.9.171:63280-0] handler.CloseRegionHandler(122): Closed hbase:meta,,1.1588230740
2016-08-18 15:27:17,637 INFO [RS:0;10.22.9.171:63282] regionserver.HRegionServer(1091): stopping server 10.22.9.171,63282,1471559038490; all regions closed.
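The entries above are HBase's flush-on-close path: each region disables compactions, drains its memstore through a .tmp HFile, commits that file into the column-family directory, and writes a recovered.edits/*.seqid marker before the close handler reports the region closed. The same memstore flush can also be requested from a client at any time; the following is a minimal sketch using the standard HBase Admin API (the class name FlushTableSketch is illustrative and the table name is taken from the log above; none of this is part of the test source):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Minimal sketch: ask the cluster to flush a table's memstores to HFiles,
    // the same DefaultStoreFlusher work the close handlers perform above.
    public class FlushTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Table name copied from the log; substitute your own namespace:qualifier.
          admin.flush(TableName.valueOf("ns1", "test-1471559060953"));
        }
      }
    }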
2016-08-18 15:27:17,637 INFO [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2fc1fd9c] blockmanagement.BlockManager(3482): BLOCK* BlockManager: ask 127.0.0.1:63273 to delete [blk_1073741984_1160, blk_1073741985_1161, blk_1073741986_1162, blk_1073741987_1163, blk_1073741988_1164, blk_1073741989_1165, blk_1073741990_1166, blk_1073741991_1167, blk_1073741992_1168, blk_1073741993_1169, blk_1073741994_1170, blk_1073741995_1171, blk_1073741996_1172, blk_1073741997_1173, blk_1073741998_1174, blk_1073741973_1149, blk_1073741914_1090, blk_1073741946_1122, blk_1073741979_1155, blk_1073741980_1156, blk_1073741917_1093, blk_1073741981_1157, blk_1073741982_1158, blk_1073741983_1159]
2016-08-18 15:27:17,637 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490
2016-08-18 15:27:17,637 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1090): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-2.1471559098118
2016-08-18 15:27:17,643 INFO [IPC Server handler 5 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741886_1062{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 2749
2016-08-18 15:27:17,666 INFO [M:0;10.22.9.171:63280] regionserver.HRegionServer(1091): stopping server 10.22.9.171,63280,1471559038246; all regions closed.
2016-08-18 15:27:17,666 DEBUG [M:0;10.22.9.171:63280] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta
2016-08-18 15:27:17,666 DEBUG [M:0;10.22.9.171:63280] wal.FSHLog(1090): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246.meta/10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0.1471559039141
2016-08-18 15:27:17,679 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 83
2016-08-18 15:27:17,682 DEBUG [M:0;10.22.9.171:63280] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs
2016-08-18 15:27:17,682 INFO [M:0;10.22.9.171:63280] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63280%2C1471559038246.meta.regiongroup-0:(num 1471559039141)
2016-08-18 15:27:17,682 DEBUG [M:0;10.22.9.171:63280] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246
2016-08-18 15:27:17,682 DEBUG [M:0;10.22.9.171:63280] wal.FSHLog(1090): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-0.1471559098548
2016-08-18 15:27:17,686 INFO [IPC Server handler 4 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741888_1064{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 83
2016-08-18 15:27:17,689 DEBUG [M:0;10.22.9.171:63280] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs
2016-08-18 15:27:17,689 INFO [M:0;10.22.9.171:63280] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63280%2C1471559038246.regiongroup-0:(num 1471559098548)
2016-08-18 15:27:17,689 DEBUG [M:0;10.22.9.171:63280] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246
2016-08-18 15:27:17,690 DEBUG [M:0;10.22.9.171:63280] wal.FSHLog(1090): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63280,1471559038246/10.22.9.171%2C63280%2C1471559038246.regiongroup-1.1471559098118
2016-08-18 15:27:17,692 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741885_1061{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 83
2016-08-18 15:27:17,696 DEBUG [M:0;10.22.9.171:63280] wal.FSHLog(1045): Moved 2 WAL file(s) to /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs
2016-08-18 15:27:17,696 INFO [M:0;10.22.9.171:63280] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63280%2C1471559038246.regiongroup-1:(num 1471559098118)
2016-08-18 15:27:17,696 DEBUG [M:0;10.22.9.171:63280] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:27:17,696 INFO [M:0;10.22.9.171:63280] regionserver.Leases(146): M:0;10.22.9.171:63280 closing leases
2016-08-18 15:27:17,696 INFO [M:0;10.22.9.171:63280] regionserver.Leases(149): M:0;10.22.9.171:63280 closed leases
2016-08-18 15:27:17,696 INFO [M:0;10.22.9.171:63280] hbase.ChoreService(323): Chore service for: 10.22.9.171,63280,1471559038246 had [[ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63280,1471559038246-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63280,1471559038246-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63280,1471559038246-ClusterStatusChore Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63280,1471559038246-MemstoreFlusherChore Period: 1000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63280,1471559038246-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: CatalogJanitor-10.22.9.171:63280 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,63280,1471559038246 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.22.9.171,63280,1471559038246-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS]] on shutdown
2016-08-18 15:27:17,741 INFO [10.22.9.171,63280,1471559038246_splitLogManager__ChoreService_1] hbase.ScheduledChore(179): Chore: SplitLogManager Timeout Monitor was stopped
2016-08-18 15:27:18,015 INFO [10.22.9.171,63282,1471559038490_ChoreService_1] hbase.ScheduledChore(179): Chore: 10.22.9.171,63282,1471559038490-MemstoreFlusherChore was stopped
2016-08-18 15:27:18,049 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1045): Moved 2 WAL file(s) to /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs
2016-08-18 15:27:18,049 INFO [RS:0;10.22.9.171:63282] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63282%2C1471559038490.regiongroup-2:(num 1471559098118)
2016-08-18 15:27:18,050 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490
2016-08-18 15:27:18,050 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1090): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-0.1471559098547
2016-08-18 15:27:18,054 INFO [IPC Server handler 8 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741887_1063{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 1511
2016-08-18 15:27:18,260 INFO [master//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting.
2016-08-18 15:27:18,265 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/replication/rs/10.22.9.171,63280,1471559038246
2016-08-18 15:27:18,265 INFO [M:0;10.22.9.171:63280] master.MasterMobCompactionThread(175): Waiting for Mob Compaction Thread to finish...
2016-08-18 15:27:18,266 INFO [M:0;10.22.9.171:63280] master.MasterMobCompactionThread(175): Waiting for Region Server Mob Compaction Thread to finish...
2016-08-18 15:27:18,266 INFO [M:0;10.22.9.171:63280] master.ServerManager(554): Waiting on regionserver(s) to go down 10.22.9.171,63282,1471559038490, 10.22.9.171,63280,1471559038246
2016-08-18 15:27:18,355 INFO [regionserver//10.22.9.171:0.logRoller] regionserver.LogRoller(170): LogRoller exiting.
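Once a server has no live regions, each FSHLog writer is closed and its files are moved from the server's WALs directory into the shared oldWALs archive, as the "Moved N WAL file(s)" entries show. A quick way to confirm the archived files landed is a plain HDFS listing; below is a minimal sketch with the stock Hadoop FileSystem API, reusing the NameNode address and test-data path from this log (the class name ListOldWalsSketch is illustrative):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Minimal sketch: list the archived WALs that FSHLog moved to oldWALs.
    public class ListOldWalsSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
            URI.create("hdfs://localhost:63272"), new Configuration());
        Path oldWals = new Path(
            "/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs");
        for (FileStatus stat : fs.listStatus(oldWals)) {
          System.out.printf("%s\t%d bytes%n", stat.getPath().getName(), stat.getLen());
        }
      }
    }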
2016-08-18 15:27:18,459 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1045): Moved 1 WAL file(s) to /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs
2016-08-18 15:27:18,459 INFO [RS:0;10.22.9.171:63282] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63282%2C1471559038490.regiongroup-0:(num 1471559098547)
2016-08-18 15:27:18,459 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490
2016-08-18 15:27:18,459 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1090): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-1.1471559098984
2016-08-18 15:27:18,475 INFO [IPC Server handler 2 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741890_1066{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-bc66096c-87cf-4516-82cd-7e3254d55826:NORMAL:127.0.0.1:63273|RBW]]} size 8424
2016-08-18 15:27:18,886 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1045): Moved 3 WAL file(s) to /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs
2016-08-18 15:27:18,886 INFO [RS:0;10.22.9.171:63282] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63282%2C1471559038490.regiongroup-1:(num 1471559098984)
2016-08-18 15:27:18,886 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1087): Closing WAL writer in /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490
2016-08-18 15:27:18,886 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1090): closing hdfs://localhost:63272/user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/WALs/10.22.9.171,63282,1471559038490/10.22.9.171%2C63282%2C1471559038490.regiongroup-3.1471559098969
2016-08-18 15:27:18,891 INFO [IPC Server handler 6 on 63272] blockmanagement.BlockManager(2621): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:63273 is added to blk_1073741889_1065{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-c658d9fd-3dd7-43e9-91ae-f14caccad212:NORMAL:127.0.0.1:63273|RBW]]} size 2752
2016-08-18 15:27:19,302 DEBUG [RS:0;10.22.9.171:63282] wal.FSHLog(1045): Moved 2 WAL file(s) to /user/tyu/test-data/2479f191-cda8-4ea6-a865-15612aee031a/oldWALs
2016-08-18 15:27:19,302 INFO [RS:0;10.22.9.171:63282] wal.FSHLog(1048): Closed WAL: FSHLog 10.22.9.171%2C63282%2C1471559038490.regiongroup-3:(num 1471559098969)
2016-08-18 15:27:19,302 DEBUG [RS:0;10.22.9.171:63282] ipc.AsyncRpcClient(320): Stopping async HBase RPC client
2016-08-18 15:27:19,302 INFO [M:0;10.22.9.171:63280] master.ServerManager(554): Waiting on regionserver(s) to go down 10.22.9.171,63282,1471559038490, 10.22.9.171,63280,1471559038246
2016-08-18 15:27:19,302 INFO [RS:0;10.22.9.171:63282] regionserver.Leases(146): RS:0;10.22.9.171:63282 closing leases
2016-08-18 15:27:19,302 DEBUG [RpcServer.reader=1,bindAddress=10.22.9.171,port=63280] ipc.RpcServer$Listener(912): RpcServer.listener,port=63280: DISCONNECTING client 10.22.9.171:63290 because read count=-1. Number of active connections: 6
2016-08-18 15:27:19,302 DEBUG [AsyncRpcChannel-pool2-t1] ipc.AsyncRpcChannel$8(566): IPC Client (-1759784487) to /10.22.9.171:63280 from tyu.hfs.0: closed
2016-08-18 15:27:19,302 INFO [RS:0;10.22.9.171:63282] regionserver.Leases(149): RS:0;10.22.9.171:63282 closed leases
2016-08-18 15:27:19,303 INFO [RS:0;10.22.9.171:63282] hbase.ChoreService(323): Chore service for: 10.22.9.171,63282,1471559038490 had [[ScheduledChore: Name: MovedRegionsCleaner for region 10.22.9.171,63282,1471559038490 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-08-18 15:27:19,303 INFO [RS:0;10.22.9.171:63282] regionserver.CompactSplitThread(403): Waiting for Split Thread to finish...
2016-08-18 15:27:19,303 INFO [RS:0;10.22.9.171:63282] regionserver.CompactSplitThread(403): Waiting for Merge Thread to finish...
2016-08-18 15:27:19,303 INFO [RS:0;10.22.9.171:63282] regionserver.CompactSplitThread(403): Waiting for Large Compaction Thread to finish...
2016-08-18 15:27:19,303 INFO [RS:0;10.22.9.171:63282] regionserver.CompactSplitThread(403): Waiting for Small Compaction Thread to finish...
2016-08-18 15:27:19,307 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/replication/rs/10.22.9.171,63282,1471559038490
2016-08-18 15:27:19,307 INFO [RS:0;10.22.9.171:63282] ipc.RpcServer(2336): Stopping server on 63282
2016-08-18 15:27:19,307 INFO [RpcServer.listener,port=63282] ipc.RpcServer$Listener(816): RpcServer.listener,port=63282: stopping
2016-08-18 15:27:19,308 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-18 15:27:19,308 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-18 15:27:19,310 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.9.171,63282,1471559038490
2016-08-18 15:27:19,310 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.9.171,63282,1471559038490
2016-08-18 15:27:19,310 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): regionserver:63282-0x1569fc0e7310001, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-08-18 15:27:19,310 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,63282,1471559038490]
2016-08-18 15:27:19,311 INFO [main-EventThread] master.ServerManager(609): Cluster shutdown set; 10.22.9.171,63282,1471559038490 expired; onlineServers=1
2016-08-18 15:27:19,311 INFO [RS:0;10.22.9.171:63282] regionserver.HRegionServer(1135): stopping server 10.22.9.171,63282,1471559038490; zookeeper connection closed.
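The NodeDeleted and NodeChildrenChanged events on /1/rs above are how the master learns that a region server is gone: each server holds an ephemeral znode under /1/rs, and RegionServerTracker treats the deletion of that node as the server's expiration. The following is a minimal sketch of watching those znodes directly with the plain Apache ZooKeeper client, using the test quorum and base znode from this log; the class name RsTrackerSketch and the 30-second session timeout are illustrative choices, not taken from the test:

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.ZooKeeper;

    // Minimal sketch: watch region-server znodes the way RegionServerTracker does.
    public class RsTrackerSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:61765", 30000,
            // NodeChildrenChanged fires on /1/rs when an ephemeral node goes away.
            (WatchedEvent event) ->
                System.out.println(event.getType() + " " + event.getPath()));
        // Passing true re-arms the default watcher, as ZKUtil does after each event.
        List<String> online = zk.getChildren("/1/rs", true);
        System.out.println("online region servers: " + online);
        Thread.sleep(60_000); // keep the session alive long enough to observe events
        zk.close();
      }
    }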
2016-08-18 15:27:19,311 INFO [RS:0;10.22.9.171:63282] regionserver.HRegionServer(1138): RS:0;10.22.9.171:63282 exiting
2016-08-18 15:27:19,311 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-08-18 15:27:19,311 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@45d000f1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(190): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@45d000f1
2016-08-18 15:27:19,311 INFO [M:0;10.22.9.171:63280] master.ServerManager(562): ZK shows there is only the master self online, exiting now
2016-08-18 15:27:19,311 DEBUG [M:0;10.22.9.171:63280] master.HMaster(1127): Stopping service threads
2016-08-18 15:27:19,311 INFO [main] util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 regionserver(s) complete
2016-08-18 15:27:19,312 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/master
2016-08-18 15:27:19,312 INFO [M:0;10.22.9.171:63280] hbase.ChoreService(323): Chore service for: 10.22.9.171,63280,1471559038246_splitLogManager_ had [] on shutdown
2016-08-18 15:27:19,312 INFO [M:0;10.22.9.171:63280] master.LogRollMasterProcedureManager(55): stop: server shutting down.
2016-08-18 15:27:19,312 INFO [M:0;10.22.9.171:63280] flush.MasterFlushTableProcedureManager(78): stop: server shutting down.
2016-08-18 15:27:19,312 INFO [M:0;10.22.9.171:63280] ipc.RpcServer(2336): Stopping server on 63280
2016-08-18 15:27:19,312 DEBUG [main-EventThread] zookeeper.ZKUtil(367): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master
2016-08-18 15:27:19,312 INFO [RpcServer.listener,port=63280] ipc.RpcServer$Listener(816): RpcServer.listener,port=63280: stopping
2016-08-18 15:27:19,313 INFO [RpcServer.responder] ipc.RpcServer$Responder(1059): RpcServer.responder: stopped
2016-08-18 15:27:19,313 INFO [RpcServer.responder] ipc.RpcServer$Responder(962): RpcServer.responder: stopping
2016-08-18 15:27:19,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): master:63280-0x1569fc0e7310000, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.22.9.171,63280,1471559038246
2016-08-18 15:27:19,314 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.22.9.171,63280,1471559038246]
2016-08-18 15:27:19,314 INFO [M:0;10.22.9.171:63280] regionserver.HRegionServer(1135): stopping server 10.22.9.171,63280,1471559038246; zookeeper connection closed.
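JVMClusterUtil reporting "Shutdown of 1 master(s) and 1 regionserver(s) complete" ends the HBase half of the teardown; the entries that follow stop the MiniZK cluster, the embedded HTTP servers, HDFS, and finally the mini MapReduce cluster. In test code this whole sequence is typically driven by two HBaseTestingUtility calls; a minimal sketch, assuming a JUnit 4-style test class that started both clusters (the class and field names here are illustrative, not from the test source):

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    // Minimal teardown sketch; TEST_UTIL would have been created and started in setup.
    public class TeardownSketch {
      static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      // Typically annotated @AfterClass in a JUnit 4 test.
      public static void tearDownAfterClass() throws Exception {
        TEST_UTIL.shutdownMiniCluster();           // logs "Minicluster is down"
        TEST_UTIL.shutdownMiniMapReduceCluster();  // logs "Stopping mini mapreduce cluster..."
      }
    }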
2016-08-18 15:27:19,314 INFO [M:0;10.22.9.171:63280] regionserver.HRegionServer(1138): M:0;10.22.9.171:63280 exiting
2016-08-18 15:27:19,331 INFO [main] zookeeper.MiniZooKeeperCluster(319): Shutdown MiniZK cluster with all ZK servers
2016-08-18 15:27:19,331 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-08-18 15:27:19,339 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-18 15:27:19,416 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x72a48fe4-0x1569fc0e731000f, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-18 15:27:19,416 DEBUG [10.22.9.171:63280.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner-0x1569fc0e7310004, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-18 15:27:19,416 DEBUG [10.22.9.171:63280.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(679): replicationLogCleaner-0x1569fc0e7310004, quorum=localhost:61765, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-18 15:27:19,416 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x72a48fe4-0x1569fc0e731000f, quorum=localhost:61765, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-18 15:27:19,416 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x61ff4593-0x1569fc0e731000d, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-18 15:27:19,417 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x61ff4593-0x1569fc0e731000d, quorum=localhost:61765, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-18 15:27:19,416 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280-EventThread] zookeeper.ZooKeeperWatcher(590): hconnection-0x79f12b8f-0x1569fc0e731000e, quorum=localhost:61765, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-18 15:27:19,417 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=63280-EventThread] zookeeper.ZooKeeperWatcher(679): hconnection-0x79f12b8f-0x1569fc0e731000e, quorum=localhost:61765, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-08-18 15:27:19,416 DEBUG [10.22.9.171:63314.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(590): replicationLogCleaner-0x1569fc0e731000a, quorum=localhost:61765, baseZNode=/2 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-08-18 15:27:19,417 DEBUG [10.22.9.171:63314.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(679): replicationLogCleaner-0x1569fc0e731000a, quorum=localhost:61765, baseZNode=/2 Received Disconnected from ZooKeeper, ignoring
2016-08-18 15:27:19,445 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/1d1f5955-4d11-4726-93f5-8aac3e385d0f/dfscluster_2df11bdc-bdca-4f31-985d-3dff80836cf4/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/1d1f5955-4d11-4726-93f5-8aac3e385d0f/dfscluster_2df11bdc-bdca-4f31-985d-3dff80836cf4/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:63272] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1495083349-10.22.9.171-1471559035413 (Datanode Uuid 8b6ba817-439f-49df-933f-23509985671f) service to localhost/127.0.0.1:63272 interrupted
2016-08-18 15:27:19,445 WARN [DataNode: [[[DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/1d1f5955-4d11-4726-93f5-8aac3e385d0f/dfscluster_2df11bdc-bdca-4f31-985d-3dff80836cf4/dfs/data/data1/, [DISK]file:/Users/tyu/upstream-backup/hbase-server/target/test-data/1d1f5955-4d11-4726-93f5-8aac3e385d0f/dfscluster_2df11bdc-bdca-4f31-985d-3dff80836cf4/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:63272] datanode.BPServiceActor(835): Ending block pool service for: Block pool BP-1495083349-10.22.9.171-1471559035413 (Datanode Uuid 8b6ba817-439f-49df-933f-23509985671f) service to localhost/127.0.0.1:63272
2016-08-18 15:27:19,517 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-08-18 15:27:19,658 INFO [main] hbase.HBaseTestingUtility(1155): Minicluster is down
2016-08-18 15:27:19,658 INFO [main] hbase.HBaseTestingUtility(2498): Stopping mini mapreduce cluster...
2016-08-18 15:27:19,664 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-18 15:27:21,955 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-jobhistoryserver.properties,hadoop-metrics2.properties
2016-08-18 15:27:21,970 ERROR [HBase-Metrics2-1] lib.MethodMetric$2(118): Error invoking method getBlocksTotal
java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor146.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
    at org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
    at org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:401)
    at org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:194)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:220)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:96)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:270)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$1.postStart(MetricsSystemImpl.java:240)
    at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$3.invoke(MetricsSystemImpl.java:322)
    at com.sun.proxy.$Proxy14.postStart(Unknown Source)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:194)
    at org.apache.hadoop.metrics2.impl.JmxCacheBuster$JmxCacheBusterRunnable.run(JmxCacheBuster.java:78)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.size(BlocksMap.java:203)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.getTotalBlocks(BlockManager.java:3370)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlocksTotal(FSNamesystem.java:5729)
    ... 32 more
2016-08-18 15:27:21,981 ERROR [HBase-Metrics2-1] lib.MethodMetric$2(118): Error invoking method getBlocksTotal
java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor146.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
    at org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
    at org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:401)
    at org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:194)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:220)
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:96)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:270)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$1.postStart(MetricsSystemImpl.java:240)
    at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$3.invoke(MetricsSystemImpl.java:322)
    at com.sun.proxy.$Proxy14.postStart(Unknown Source)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:194)
    at org.apache.hadoop.metrics2.impl.JmxCacheBuster$JmxCacheBusterRunnable.run(JmxCacheBuster.java:78)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.size(BlocksMap.java:203)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.getTotalBlocks(BlockManager.java:3370)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlocksTotal(FSNamesystem.java:5729)
    ... 32 more
2016-08-18 15:27:21,981 INFO [Socket Reader #1 for port 63355] ipc.Server$Connection(1316): Auth successful for appattempt_1471559057429_0003_000001 (auth:SIMPLE)
2016-08-18 15:27:33,803 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-18 15:27:47,828 ERROR [Thread[Thread-635,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2016-08-18 15:27:47,829 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-18 15:27:47,937 WARN [ApplicationMaster Launcher] amlauncher.ApplicationMasterLauncher$LauncherThread(122): org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread interrupted. Returning.
2016-08-18 15:27:47,940 ERROR [ResourceManager Event Processor] resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor(666): Returning, interrupted : java.lang.InterruptedException
2016-08-18 15:27:47,940 ERROR [Thread[Thread-466,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2016-08-18 15:27:47,944 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@tyus-macbook-pro.local:0
2016-08-18 15:27:48,047 ERROR [Thread[Thread-445,5,main]] delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover(659): ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2016-08-18 15:27:48,047 INFO [main] hbase.HBaseTestingUtility(2501): Mini mapreduce cluster stopped
2016-08-18 15:27:48,052 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@238bb76a
2016-08-18 15:27:48,052 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
2016-08-18 15:27:48,052 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@238bb76a
2016-08-18 15:27:48,052 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
2016-08-18 15:27:48,052 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@238bb76a
2016-08-18 15:27:48,052 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
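The two getBlocksTotal errors above are shutdown noise rather than a test failure: Hadoop's metrics library (lib.MethodMetric in the trace) snapshots each gauge by invoking its getter through reflection, and by the time JmxCacheBuster re-registers the MBeans the mini cluster's FSNamesystem has already been torn down, so the getter dereferences a cleared BlocksMap. Because Method.invoke() wraps whatever the target method throws in an InvocationTargetException, the real NullPointerException surfaces only as the trace's "Caused by". A self-contained sketch of that failure mode, assuming a hypothetical BlockCounter class in place of the HDFS internals:

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class MetricSnapshotSketch {
    // hypothetical metrics source standing in for FSNamesystem
    static class BlockCounter {
        long[] blocks = new long[8];   // cleared on shutdown
        public long getBlocksTotal() {
            return blocks.length;      // NPE once blocks is null
        }
    }

    public static void main(String[] args) throws Exception {
        BlockCounter source = new BlockCounter();
        source.blocks = null;          // simulate the torn-down namesystem
        Method getter = BlockCounter.class.getMethod("getBlocksTotal");
        try {
            // the metrics library snapshots gauges roughly like this
            System.out.println(getter.invoke(source));
        } catch (InvocationTargetException e) {
            // the underlying failure is visible only as the wrapped cause,
            // matching the "Caused by: java.lang.NullPointerException"
            System.err.println("Error invoking method getBlocksTotal: "
                + e.getCause());
        }
    }
}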
2016-08-18 15:27:48,052 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@238bb76a
2016-08-18 15:27:48,052 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(120): Starting fs shutdown hook thread.
2016-08-18 15:27:48,060 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
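The closing ShutdownHookThread entries show HBase's hook-chaining pattern: it suppresses Hadoop's FileSystem shutdown hook (the fsShutdownHook=...ClientFinalizer reference above) and runs it itself only after region-server cleanup, so HDFS client connections outlive the final flushes. A minimal sketch of the same chaining with plain JVM shutdown hooks; the runnable, messages, and thread name are illustrative, not HBase's ShutdownHook implementation:

public class ShutdownHookSketch {
    public static void main(String[] args) {
        // stand-in for FileSystem$Cache$ClientFinalizer, which HBase
        // unregisters so it can run it at the right point itself
        Runnable fsShutdownHook = () ->
            System.out.println("closing filesystem clients");

        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("Shutdown hook starting");
            // ... stop region server threads, flush WALs ...
            System.out.println("Starting fs shutdown hook thread.");
            fsShutdownHook.run();      // deferred fs cleanup runs last
            System.out.println("Shutdown hook finished.");
        }, "hbase-shutdown-hook-sketch"));
        // the hook fires on normal JVM exit or SIGTERM
    }
}

Run normally, the sketch prints its three markers in the same order as the (111), (120), and (133) log lines above, which is the ordering guarantee the chaining exists to provide.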